modelId (string, len 5–139) | author (string, len 2–42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-02 12:32:32) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 534 classes) | tags (list, len 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-02 12:31:20) | card (string, len 11 – 1.01M)
---|---|---|---|---|---|---|---|---|---|
FacebookAI/xlm-mlm-ende-1024
|
FacebookAI
| 2023-07-11T14:46:38Z | 366 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"de",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- multilingual
- en
- de
license: cc-by-nc-4.0
---
# xlm-mlm-ende-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-ende-1024 is a transformer pretrained using a masked language modeling (MLM) objective for English-German. This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English-German
- **License:** CC-BY-NC-4.0
- **Related Models:** [xlm-clm-enfr-1024](https://huggingface.co/xlm-clm-enfr-1024), [xlm-clm-ende-1024](https://huggingface.co/xlm-clm-ende-1024), [xlm-mlm-enfr-1024](https://huggingface.co/xlm-mlm-enfr-1024), [xlm-mlm-enro-1024](https://huggingface.co/xlm-mlm-enro-1024)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo](https://github.com/facebookresearch/XLM)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for masked language modeling.
## Downstream Use
To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask) and the [Hugging Face Multilingual Models for Inference](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) docs.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training
The model developers write:
> In all experiments, we use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for links, citations, and further details on the training data and training procedure.
The model developers also write that:
> If you use these models, you should use the same data preprocessing / BPE codes to preprocess your data.
See the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
# Evaluation
## Testing Data, Factors & Metrics
The model developers evaluated the model on the [WMT'16 English-German](https://huggingface.co/datasets/wmt16) dataset using the [BLEU metric](https://huggingface.co/spaces/evaluate-metric/bleu). See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details on the testing data, factors and metrics.
## Results
For xlm-mlm-ende-1024 results, see Table 1 and Table 2 of the [associated paper](https://arxiv.org/pdf/1901.07291.pdf).
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
The model developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
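A minimal sketch of masked-word prediction with language embeddings, following the multilingual-inference docs linked above (the example sentence and the greedy decoding step are illustrative, not from the original card):
```python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("FacebookAI/xlm-mlm-ende-1024")
model = XLMWithLMHeadModel.from_pretrained("FacebookAI/xlm-mlm-ende-1024")

# Mask one token in a German sentence
text = f"Berlin ist die Hauptstadt von {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

# XLM expects a language id for every position, passed via `langs`
langs = torch.full_like(inputs["input_ids"], tokenizer.lang2id["de"])
logits = model(**inputs, langs=langs).logits

# Greedy fill of the masked position
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```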
|
1aurent/Pixelcopter-PLE-v0
|
1aurent
| 2023-07-11T14:46:11Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T11:56:47Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 41.80 +/- 32.43
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
FacebookAI/xlm-mlm-tlm-xnli15-1024
|
FacebookAI
| 2023-07-11T14:45:55Z | 252 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"xlm",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"arxiv:1901.07291",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language:
- multilingual
- en
- fr
- es
- de
- el
- bg
- ru
- tr
- ar
- vi
- th
- zh
- hi
- sw
- ur
license: cc-by-nc-4.0
---
# xlm-mlm-tlm-xnli15-1024
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
4. [Training Details](#training-details)
5. [Evaluation](#evaluation)
6. [Environmental Impact](#environmental-impact)
7. [Technical Specifications](#technical-specifications)
8. [Citation](#citation)
9. [Model Card Authors](#model-card-authors)
10. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
The XLM model was proposed in [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample, Alexis Conneau. xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. The model developers evaluated the capacity of the model to make correct predictions in all 15 XNLI languages (see the [XNLI data card](https://huggingface.co/datasets/xnli) for further information on XNLI).
## Model Description
- **Developed by:** Guillaume Lample, Alexis Conneau, see [associated paper](https://arxiv.org/abs/1901.07291)
- **Model type:** Language model
- **Language(s) (NLP):** English; evaluated in 15 languages (see [XNLI data card](https://huggingface.co/datasets/xnli))
- **License:** CC-BY-NC-4.0
- **Related Models:** [XLM models](https://huggingface.co/models?sort=downloads&search=xlm)
- **Resources for more information:**
- [Associated paper](https://arxiv.org/abs/1901.07291)
- [GitHub Repo for XLM](https://github.com/facebookresearch/XLM)
- [GitHub Repo for XNLI](https://github.com/facebookresearch/XNLI)
- [XNLI data card](https://huggingface.co/datasets/xnli)
- [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings)
# Uses
## Direct Use
The model is a language model. The model can be used for cross-lingual text classification. Though the model is fine-tuned based on English text data, the model's ability to classify sentences in 14 other languages has been evaluated (see [Evaluation](#evaluation)).
## Downstream Use
This model can be used for downstream tasks related to natural language inference in different languages. For more information, see the [associated paper](https://arxiv.org/abs/1901.07291).
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
# Training Details
Training details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Training Data
The model developers write:
> We use WikiExtractor to extract raw sentences from Wikipedia dumps and use them as mono-lingual data for the CLM and MLM objectives. For the TLM objective, we only use parallel data that involves English, similar to Conneau et al. (2018b).
> - Precisely, we use MultiUN (Ziemski et al., 2016) for French, Spanish, Russian, Arabic and Chinese, and the IIT Bombay corpus (Anoop et al., 2018) for Hindi.
> - We extract the following corpora from the OPUS website (Tiedemann, 2012): the EUbookshop corpus for German, Greek and Bulgarian, OpenSubtitles 2018 for Turkish, Vietnamese and Thai, Tanzil for both Urdu and Swahili and GlobalVoices for Swahili.
> - For Chinese, Japanese and Thai we use the tokenizer of Chang et al. (2008), the Kytea tokenizer, and the PyThaiNLP tokenizer respectively.
> - For all other languages, we use the tokenizer provided by Moses (Koehn et al., 2007), falling back on the default English tokenizer when necessary.
For fine-tuning, the developers used the English NLI dataset (see the [XNLI data card](https://huggingface.co/datasets/xnli)).
## Training Procedure
### Preprocessing
The model developers write:
> We use fastBPE to learn BPE codes and split words into subword units. The BPE codes are learned on the concatenation of sentences sampled from all languages, following the method presented in Section 3.1.
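For illustration, a hedged sketch of applying pre-learned BPE codes with fastBPE's Python bindings (the file paths are placeholders; the actual codes and vocabulary are distributed through the XLM GitHub repo linked above):
```python
import fastBPE  # pip install fastBPE

# Placeholder paths: use the BPE codes / vocab shipped with the XLM repo
bpe = fastBPE.fastBPE("codes_xnli_15", "vocab_xnli_15")

# Sentences should already be tokenized (e.g. with Moses) before applying BPE
print(bpe.apply(["the quick brown fox", "der schnelle braune Fuchs"]))
```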
### Speeds, Sizes, Times
The model developers write:
> We use a Transformer architecture with 1024 hidden units, 8 heads, GELU activations (Hendrycks and Gimpel, 2016), a dropout rate of 0.1 and learned positional embeddings. We train our models with the Adam optimizer (Kingma and Ba, 2014), a linear warm-up (Vaswani et al., 2017) and learning rates varying from 10^−4 to 5.10^−4.
>
> For the CLM and MLM objectives, we use streams of 256 tokens and mini-batches of size 64. Unlike Devlin et al. (2018), a sequence in a mini-batch can contain more than two consecutive sentences, as explained in Section 3.2. For the TLM objective, we sample mini-batches of 4000 tokens composed of sentences with similar lengths. We use the averaged perplexity over languages as a stopping criterion for training. For machine translation, we only use 6 layers, and we create mini-batches of 2000 tokens.
>
> When fine-tuning on XNLI, we use mini-batches of size 8 or 16, and we clip the sentence length to 256 words. We use 80k BPE splits and a vocabulary of 95k and train a 12-layer model on the Wikipedias of the XNLI languages. We sample the learning rate of the Adam optimizer with values from 5.10^−4 to 2.10^−4, and use small evaluation epochs of 20000 random samples. We use the first hidden state of the last layer of the transformer as input to the randomly initialized final linear classifier, and fine-tune all parameters. In our experiments, using either max-pooling or mean-pooling over the last layer did not work better than using the first hidden state.
>
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
# Evaluation
## Testing Data, Factors & Metrics
After fine-tuning the model on the English NLI dataset, the model developers evaluated the capacity of the model to make correct predictions in the 15 XNLI languages using the XNLI data and the metric of test accuracy. See the [associated paper](https://arxiv.org/pdf/1901.07291.pdf) for further details.
## Results
|Language| en | fr | es | de | el | bg | ru | tr | ar | vi | th | zh | hi | sw | ur |
|:------:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|:--:|:---:|:--:|:--:|:--:|
|Accuracy|85.0|78.7 |78.9|77.8|76.6|77.4|75.3 |72.5|73.1|76.1|73.2|76.5 |69.6|68.4|67.3|
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 64 Volta GPUs
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications
Details are culled from the [associated paper](https://arxiv.org/pdf/1901.07291.pdf). See the paper for links, citations, and further details. Also see the associated [GitHub Repo](https://github.com/facebookresearch/XLM#ii-cross-lingual-language-model-pretraining-xlm) for further details.
## Model Architecture and Objective
xlm-mlm-tlm-xnli15-1024 is a transformer pretrained using a masked language modeling (MLM) objective in combination with a translation language modeling (TLM) objective and then fine-tuned on the English NLI dataset. About the TLM objective, the developers write:
> We introduce a new translation language modeling (TLM) objective for improving cross-lingual pretraining. Our TLM objective is an extension of MLM, where instead of considering monolingual text streams, we concatenate parallel sentences as illustrated in Figure 1. We randomly mask words in both the source and target sentences. To predict a word masked in an English sentence, the model can either attend to surrounding English words or to the French translation, encouraging the model to align the English and French representations.
## Compute Infrastructure
### Hardware and Software
The developers write:
> We implement all our models in PyTorch (Paszke et al., 2017), and train them on 64 Volta GPUs for the language modeling tasks, and 8 GPUs for the MT tasks. We use float16 operations to speed up training and to reduce the memory usage of our models.
# Citation
**BibTeX:**
```bibtex
@article{lample2019cross,
title={Cross-lingual language model pretraining},
author={Lample, Guillaume and Conneau, Alexis},
journal={arXiv preprint arXiv:1901.07291},
year={2019}
}
```
**APA:**
- Lample, G., & Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
# Model Card Authors
This model card was written by the team at Hugging Face.
# How to Get Started with the Model
This model uses language embeddings to specify the language used at inference. See the [Hugging Face Multilingual Models for Inference docs](https://huggingface.co/docs/transformers/v4.20.1/en/multilingual#xlm-with-language-embeddings) for further details.
|
turkish-nlp-suite/tr_vectors_web_md
|
turkish-nlp-suite
| 2023-07-11T14:42:20Z | 0 | 0 |
spacy
|
[
"spacy",
"floret",
"fasttext",
"feature-extraction",
"token-classification",
"tr",
"arxiv:1910.10683",
"doi:10.57967/hf/0085",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] |
token-classification
| 2022-11-02T17:22:50Z |
---
tags:
- spacy
- floret
- fasttext
- feature-extraction
- token-classification
language:
- tr
license: cc-by-sa-4.0
model-index:
- name: tr_vectors_web_md
results:
- task:
name: NMT
type: token-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1112
---
Medium-sized Turkish Floret word vectors for spaCy.
The vectors are trained on the MC4 corpus using Floret with the following hyperparameters:
```
floret cbow -dim 300 --mode floret --bucket 50000 -minn 4 -maxn 5 -minCount 100
-neg 10 -hashCount 2 -thread 12 -epoch 5
```
Vectors are published in Floret format.
| Feature | Description |
| --- | --- |
| **Name** | `tr_vectors_web_md` |
| **Version** | `1.0` |
| **Vectors** | 50000 keys (300 dimensions) |
| **Sources** | [MC4](https://arxiv.org/abs/1910.10683) |
| **License** | `cc-by-sa-4.0` |
| **Author** | [Duygu Altinok](https://www.onlyduygu.com/) |
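A hedged usage sketch, assuming the vector package has been installed as a spaCy pipeline package (e.g. from the wheel file in this repository):
```python
import spacy

# Loads the installed vector package; tokens get 300-dimensional floret vectors
nlp = spacy.load("tr_vectors_web_md")
doc = nlp("Dün akşam parkta uzun bir yürüyüş yaptık.")

print(doc[0].vector.shape)        # (300,)
print(doc[0].similarity(doc[3]))  # cosine similarity between two tokens
```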
---
If you'd like to use the vectors in your own work, please kindly cite the paper [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/):
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
soduhh/bert-finetuned-squad-kaggle
|
soduhh
| 2023-07-11T14:37:34Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T10:59:09Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: soduhh/bert-finetuned-squad-kaggle
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# soduhh/bert-finetuned-squad-kaggle
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5676
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 16635, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
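The `AdamWeightDecay`/`PolynomialDecay` configuration listed above corresponds to what `transformers.create_optimizer` produces for TensorFlow; a hedged reconstruction, not necessarily the exact call used to produce it:
```python
from transformers import create_optimizer

# AdamWeightDecay with a linear (power=1.0) PolynomialDecay from 2e-5 to 0 over
# 16,635 steps, weight decay rate 0.01 and no warmup, matching the config above
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=16635,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```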
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2641 | 0 |
| 0.7755 | 1 |
| 0.5676 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
gbellamy/rl_course_vizdoom_health_gathering_supreme
|
gbellamy
| 2023-07-11T14:31:42Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T14:31:32Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 11.87 +/- 4.95
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r gbellamy/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
vives/coloringbook-safetensors
|
vives
| 2023-07-11T14:28:31Z | 0 | 0 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-07-11T05:42:44Z |
---
license: gpl-3.0
---
This is simply a copy of https://huggingface.co/MrHup/coloring-book, converted to the safetensors format.
|
edbeeching/atari_2B_atari_surround_1111
|
edbeeching
| 2023-07-11T14:26:10Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T14:25:47Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_surround
type: atari_surround
metrics:
- type: mean_reward
value: nan +/- nan
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_surround** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r edbeeching/atari_2B_atari_surround_1111
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m <path.to.enjoy.module> --algo=APPO --env=atari_surround --train_dir=./train_dir --experiment=atari_2B_atari_surround_1111
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m <path.to.train.module> --algo=APPO --env=atari_surround --train_dir=./train_dir --experiment=atari_2B_atari_surround_1111 --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
Jorgeutd/finetunepeftmodel
|
Jorgeutd
| 2023-07-11T14:21:56Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T14:20:39Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
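A hedged loading sketch that reconstructs the quantization config above; the base model name is a placeholder, since the card does not state which model the adapter was trained on:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 with double quantization and bfloat16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: replace with the adapter's base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "Jorgeutd/finetunepeftmodel")
```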
### Framework versions
- PEFT 0.4.0.dev0
|
ericNguyen0132/roberta-large-Dep
|
ericNguyen0132
| 2023-07-11T14:20:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-02T12:57:45Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta-large-Dep
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-Dep
This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8107
- Accuracy: 0.8517
- F1: 0.9118
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 469 | 0.3701 | 0.87 | 0.9264 |
| 0.4293 | 2.0 | 938 | 0.4385 | 0.865 | 0.9219 |
| 0.3302 | 3.0 | 1407 | 0.5293 | 0.85 | 0.9109 |
| 0.2784 | 4.0 | 1876 | 0.7077 | 0.8517 | 0.9118 |
| 0.1914 | 5.0 | 2345 | 0.8107 | 0.8517 | 0.9118 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jed351/whisper-large-v2-LORA-zh-HK
|
jed351
| 2023-07-11T14:13:39Z | 0 | 0 | null |
[
"zh",
"dataset:mozilla-foundation/common_voice_11_0",
"region:us"
] | null | 2023-06-28T23:03:17Z |
---
datasets:
- mozilla-foundation/common_voice_11_0
language:
- zh
---
# whisper-large-v2-LORA
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 zh-HK dataset.
Trained by following the instructions in this [notebook](https://colab.research.google.com/drive/1DOkD_5OUjFa0r5Ik3SgywJLJtEo2qLxO?usp=sharing) for around 3 epochs.
Inference instructions can be found in my [notebook](https://colab.research.google.com/github/jedcheng/cantonese_langauge_model/blob/main/whisper_large_int8_inference.ipynb).
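A hedged loading sketch with PEFT; note the card links openai/whisper-small while the repository name points to whisper-large-v2, so the base checkpoint below is an assumption and should be swapped if it does not match the adapter:
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Assumed base checkpoint (see note above)
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "jed351/whisper-large-v2-LORA-zh-HK")

# Cantonese transcription: feed log-mel features from the processor to model.generate
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v2", language="zh", task="transcribe"
)
```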
|
NasimB/gpt2-concat-all-mod-datasets2-rarity-all-2k-13k
|
NasimB
| 2023-07-11T14:09:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T12:31:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-mod-datasets2-rarity-all-2k-13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-mod-datasets2-rarity-all-2k-13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3970
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7579 | 0.32 | 500 | 5.6615 |
| 5.4331 | 0.65 | 1000 | 5.2307 |
| 5.0836 | 0.97 | 1500 | 4.9878 |
| 4.7951 | 1.3 | 2000 | 4.8377 |
| 4.6653 | 1.62 | 2500 | 4.7210 |
| 4.5581 | 1.95 | 3000 | 4.6204 |
| 4.3268 | 2.27 | 3500 | 4.5639 |
| 4.265 | 2.59 | 4000 | 4.4993 |
| 4.2249 | 2.92 | 4500 | 4.4377 |
| 4.0055 | 3.24 | 5000 | 4.4311 |
| 3.9473 | 3.57 | 5500 | 4.3864 |
| 3.9214 | 3.89 | 6000 | 4.3449 |
| 3.7308 | 4.22 | 6500 | 4.3569 |
| 3.6538 | 4.54 | 7000 | 4.3429 |
| 3.6342 | 4.86 | 7500 | 4.3310 |
| 3.529 | 5.19 | 8000 | 4.3390 |
| 3.4474 | 5.51 | 8500 | 4.3426 |
| 3.4445 | 5.84 | 9000 | 4.3400 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
tejeshbhalla/falcon_7b_finetuned
|
tejeshbhalla
| 2023-07-11T14:07:45Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T14:06:47Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
vk21/a2c-AntBulletEnv-v0-unit6
|
vk21
| 2023-07-11T14:05:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-10T23:04:56Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1513.13 +/- 249.07
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
hopkins/svo4
|
hopkins
| 2023-07-11T14:03:53Z | 137 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T13:59:24Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: svo4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# svo4
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 8.2940
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.9867 | 0.98 | 6 | 10.1727 |
| 9.9738 | 1.98 | 12 | 8.9923 |
| 8.7963 | 2.98 | 18 | 8.2940 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ashnrk/textual_inversion_river
|
ashnrk
| 2023-07-11T14:02:06Z | 18 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T12:59:59Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_river
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
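A hedged loading sketch with diffusers; the placeholder token used in the prompt is an assumption (check the learned embedding file in this repository for the actual token string):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Loads the learned embedding from this repository and registers its token
pipe.load_textual_inversion("ashnrk/textual_inversion_river")

# "<textual_inversion_river>" is a hypothetical placeholder token
image = pipe("a satellite photo of <textual_inversion_river>").images[0]
image.save("river.png")
```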
|
SHENMU007/neunit_BASE_V11.1
|
SHENMU007
| 2023-07-11T13:50:00Z | 73 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-11T10:52:57Z |
---
language:
- zh
license: mit
base_model: microsoft/speecht5_tts
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
h2o-llmstudio/falcon-7b-fix
|
h2o-llmstudio
| 2023-07-11T13:46:34Z | 17 | 1 |
transformers
|
[
"transformers",
"pytorch",
"RefinedWebModel",
"text-generation",
"custom_code",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2205.14135",
"arxiv:1911.02150",
"arxiv:2101.00027",
"arxiv:2005.14165",
"arxiv:2104.09864",
"arxiv:2306.01116",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-06T09:48:00Z |
---
datasets:
- tiiuae/falcon-refinedweb
language:
- en
inference: false
license: apache-2.0
---
⚠️ **This is an unofficial fork of the original [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) model.**
The following changes have been made:
- Fixing generation configuration setting
- Model now properly uses the specified `attention_mask` when calling `scaled_dot_product_attention` - this also allows specifying custom attention masks and working with left-padded input (see the sketch below). However, this will disable the additional memory and flash optimizations.
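A hedged sketch of batched generation with left padding, which relies on the attention-mask fix described above (padding and dtype settings are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2o-llmstudio/falcon-7b-fix", trust_remote_code=True)
tokenizer.padding_side = "left"            # left padding works because the mask is now respected
tokenizer.pad_token = tokenizer.eos_token  # Falcon's tokenizer has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    "h2o-llmstudio/falcon-7b-fix",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

batch = tokenizer(
    ["Hello, my name is", "The capital of France is"],
    return_tensors="pt", padding=True,
).to(model.device)

out = model.generate(**batch, max_new_tokens=20)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```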
# 🚀 Falcon-7B
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
*Paper coming soon* 😊.
🤗 To get started with Falcon (inference, finetuning, quantization, etc.), we recommend reading [this great blogpost from HF](https://huggingface.co/blog/falcon)!
## Why use Falcon-7B?
* **It outperforms comparable open-source models** (e.g., [MPT-7B](https://huggingface.co/mosaicml/mpt-7b), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1) etc.), thanks to being trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
* **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
⚠️ **This is a raw, pretrained model, which should be further finetuned for most use cases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-7B-Instruct](https://huggingface.co/tiiuae/falcon-7b-instruct).
🔥 **Looking for an even more powerful model?** [Falcon-40B](https://huggingface.co/tiiuae/falcon-40b) is Falcon-7B's big brother!
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
💥 **Falcon LLMs require PyTorch 2.0 for use with `transformers`!**
For fast inference with Falcon, check out [Text Generation Inference](https://github.com/huggingface/text-generation-inference)! Read more in this [blogpost](https://huggingface.co/blog/falcon).
You will need **at least 16GB of memory** to swiftly run inference with Falcon-7B.
# Model Card for Falcon-7B
## Model Details
### Model Description
- **Developed by:** [https://www.tii.ae](https://www.tii.ae);
- **Model type:** Causal decoder-only;
- **Language(s) (NLP):** English and French;
- **License:** Apache 2.0.
### Model Source
- **Paper:** *coming soon*.
## Uses
### Direct Use
Research on large language models; as a foundation for further specialization and finetuning for specific use cases (e.g., summarization, text generation, chatbot, etc.)
### Out-of-Scope Use
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
Falcon-7B is trained on English and French data only, and will not generalize appropriately to other languages. Furthermore, as it is trained on a large-scale corpora representative of the web, it will carry the stereotypes and biases commonly encountered online.
### Recommendations
We recommend users of Falcon-7B to consider finetuning it for the specific set of tasks of interest, and for guardrails and appropriate precautions to be taken for any production use.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
Falcon-7B was trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset which we enhanced with curated corpora. Significant components from our curated corpora were inspired by The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)).
| **Data source** | **Fraction** | **Tokens** | **Sources** |
|--------------------|--------------|------------|-----------------------------------|
| [RefinedWeb-English](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | 79% | 1,185B | massive web crawl |
| Books | 7% | 110B | |
| Conversations | 6% | 85B | Reddit, StackOverflow, HackerNews |
| Code | 3% | 45B | |
| RefinedWeb-French | 3% | 45B | massive web crawl |
| Technical | 2% | 30B | arXiv, PubMed, USPTO, etc. |
The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b) tokenizer.
### Training Procedure
Falcon-7B was trained on 384 A100 40GB GPUs, using a 2D parallelism strategy (PP=2, DP=192) combined with ZeRO.
#### Training Hyperparameters
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|------------|-------------------------------------------|
| Precision | `bfloat16` | |
| Optimizer | AdamW | |
| Learning rate | 6e-4 | 4B tokens warm-up, cosine decay to 1.2e-5 |
| Weight decay | 1e-1 | |
| Z-loss | 1e-4 | |
| Batch size | 2304 | 30B tokens ramp-up |
#### Speeds, Sizes, Times
Training happened in early March 2023 and took about two weeks.
## Evaluation
*Paper coming soon*.
See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) for early results.
## Technical Specifications
### Model Architecture and Objective
Falcon-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:
* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135));
* **Decoder-block:** parallel attention/MLP with a single layer norm.
| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|----------------------------------------|
| Layers | 32 | |
| `d_model` | 4544 | Increased to compensate for multiquery |
| `head_dim` | 64 | Reduced to optimise for FlashAttention |
| Vocabulary | 65024 | |
| Sequence length | 2048 | |
### Compute Infrastructure
#### Hardware
Falcon-7B was trained on AWS SageMaker, on 384 A100 40GB GPUs in P4d instances.
#### Software
Falcon-7B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels (FlashAttention, etc.)
## Citation
*Paper coming soon* 😊. In the meantime, you can use the following information to cite:
```
@article{falcon40b,
title={{Falcon-40B}: an open large language model with state-of-the-art performance},
author={Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme},
year={2023}
}
```
To learn more about the pretraining dataset, see the 📓 [RefinedWeb paper](https://arxiv.org/abs/2306.01116).
```
@article{refinedweb,
title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
journal={arXiv preprint arXiv:2306.01116},
eprint={2306.01116},
eprinttype = {arXiv},
url={https://arxiv.org/abs/2306.01116},
year={2023}
}
```
## License
Falcon-7B is made available under the Apache 2.0 license.
## Contact
[email protected]
|
AdiOO7/gpt-neox-bank-complaints
|
AdiOO7
| 2023-07-11T13:41:20Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T13:41:18Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.4.0.dev0
|
sjdata/speecht5_finetuned_voxpopuli_nl
|
sjdata
| 2023-07-11T13:37:35Z | 75 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2023-07-11T11:50:21Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4626
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
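A hedged reconstruction of these hyperparameters as `Seq2SeqTrainingArguments` (argument names follow the standard Trainer API; anything not listed above is left at its default and the output directory is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_nl",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    warmup_steps=500,
    max_steps=4000,
    seed=42,
    lr_scheduler_type="linear",
)
```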
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5217 | 4.3 | 1000 | 0.4827 |
| 0.4955 | 8.61 | 2000 | 0.4678 |
| 0.4936 | 12.91 | 3000 | 0.4666 |
| 0.4936 | 17.21 | 4000 | 0.4626 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CE
|
jordyvl
| 2023-07-11T13:13:07Z | 167 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T11:43:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CE
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7493
- Accuracy: 0.6275
- Brier Loss: 0.5677
- Nll: 2.9769
- F1 Micro: 0.6275
- F1 Macro: 0.6250
- Ece: 0.2161
- Aurc: 0.1599
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 7 | 3.0120 | 0.085 | 0.9612 | 10.2074 | 0.085 | 0.0737 | 0.1585 | 0.8946 |
| No log | 2.0 | 14 | 2.6380 | 0.12 | 0.9247 | 6.9313 | 0.12 | 0.1257 | 0.1753 | 0.8678 |
| No log | 3.0 | 21 | 2.1951 | 0.36 | 0.7931 | 5.6390 | 0.36 | 0.3256 | 0.2066 | 0.4054 |
| No log | 4.0 | 28 | 1.8405 | 0.445 | 0.6971 | 3.5387 | 0.445 | 0.4479 | 0.1889 | 0.3068 |
| No log | 5.0 | 35 | 1.6213 | 0.525 | 0.6244 | 3.3423 | 0.525 | 0.5188 | 0.1821 | 0.2477 |
| No log | 6.0 | 42 | 1.5983 | 0.5275 | 0.6177 | 3.1202 | 0.5275 | 0.5220 | 0.1781 | 0.2448 |
| No log | 7.0 | 49 | 1.6214 | 0.54 | 0.6243 | 3.2514 | 0.54 | 0.5248 | 0.1758 | 0.2535 |
| No log | 8.0 | 56 | 1.4964 | 0.5675 | 0.5862 | 2.6168 | 0.5675 | 0.5715 | 0.1585 | 0.2200 |
| No log | 9.0 | 63 | 1.5696 | 0.575 | 0.5893 | 2.9901 | 0.575 | 0.5729 | 0.1851 | 0.2123 |
| No log | 10.0 | 70 | 1.6620 | 0.54 | 0.6257 | 3.1275 | 0.54 | 0.5425 | 0.2353 | 0.2343 |
| No log | 11.0 | 77 | 1.6901 | 0.585 | 0.5967 | 3.1708 | 0.585 | 0.5753 | 0.2006 | 0.1900 |
| No log | 12.0 | 84 | 1.5686 | 0.61 | 0.5645 | 2.9975 | 0.61 | 0.6129 | 0.1904 | 0.1830 |
| No log | 13.0 | 91 | 1.7390 | 0.5675 | 0.6159 | 3.0248 | 0.5675 | 0.5574 | 0.2200 | 0.2016 |
| No log | 14.0 | 98 | 1.6423 | 0.59 | 0.5778 | 2.9212 | 0.59 | 0.5827 | 0.2015 | 0.1863 |
| No log | 15.0 | 105 | 1.6262 | 0.61 | 0.5630 | 2.9492 | 0.61 | 0.6074 | 0.1950 | 0.1700 |
| No log | 16.0 | 112 | 1.6987 | 0.5925 | 0.5791 | 3.0433 | 0.5925 | 0.5852 | 0.2123 | 0.1674 |
| No log | 17.0 | 119 | 1.7256 | 0.5975 | 0.5782 | 3.0657 | 0.5975 | 0.5929 | 0.2214 | 0.1713 |
| No log | 18.0 | 126 | 1.7127 | 0.6125 | 0.5697 | 2.9494 | 0.6125 | 0.6110 | 0.2044 | 0.1706 |
| No log | 19.0 | 133 | 1.6961 | 0.62 | 0.5627 | 2.8745 | 0.62 | 0.6181 | 0.1972 | 0.1666 |
| No log | 20.0 | 140 | 1.6784 | 0.6275 | 0.5565 | 2.9077 | 0.6275 | 0.6256 | 0.2005 | 0.1614 |
| No log | 21.0 | 147 | 1.6699 | 0.62 | 0.5549 | 2.9148 | 0.62 | 0.6189 | 0.2089 | 0.1598 |
| No log | 22.0 | 154 | 1.6705 | 0.62 | 0.5561 | 2.9207 | 0.62 | 0.6186 | 0.2036 | 0.1593 |
| No log | 23.0 | 161 | 1.6749 | 0.62 | 0.5576 | 2.8938 | 0.62 | 0.6172 | 0.2017 | 0.1594 |
| No log | 24.0 | 168 | 1.6811 | 0.62 | 0.5586 | 2.9303 | 0.62 | 0.6176 | 0.2064 | 0.1602 |
| No log | 25.0 | 175 | 1.6870 | 0.625 | 0.5595 | 2.9457 | 0.625 | 0.6225 | 0.1996 | 0.1600 |
| No log | 26.0 | 182 | 1.6905 | 0.625 | 0.5600 | 2.9438 | 0.625 | 0.6228 | 0.1957 | 0.1604 |
| No log | 27.0 | 189 | 1.6920 | 0.625 | 0.5601 | 2.9207 | 0.625 | 0.6228 | 0.2030 | 0.1603 |
| No log | 28.0 | 196 | 1.6928 | 0.6225 | 0.5596 | 2.9140 | 0.6225 | 0.6201 | 0.2104 | 0.1598 |
| No log | 29.0 | 203 | 1.6934 | 0.6225 | 0.5596 | 2.9133 | 0.6225 | 0.6201 | 0.2171 | 0.1597 |
| No log | 30.0 | 210 | 1.6952 | 0.6225 | 0.5600 | 2.9156 | 0.6225 | 0.6199 | 0.2175 | 0.1597 |
| No log | 31.0 | 217 | 1.6962 | 0.6225 | 0.5604 | 2.9195 | 0.6225 | 0.6199 | 0.2151 | 0.1597 |
| No log | 32.0 | 224 | 1.6982 | 0.625 | 0.5609 | 2.9466 | 0.625 | 0.6216 | 0.2052 | 0.1598 |
| No log | 33.0 | 231 | 1.6996 | 0.625 | 0.5610 | 2.9468 | 0.625 | 0.6220 | 0.2073 | 0.1598 |
| No log | 34.0 | 238 | 1.7008 | 0.625 | 0.5611 | 2.9223 | 0.625 | 0.6220 | 0.2099 | 0.1595 |
| No log | 35.0 | 245 | 1.7028 | 0.625 | 0.5615 | 2.9159 | 0.625 | 0.6218 | 0.2062 | 0.1597 |
| No log | 36.0 | 252 | 1.7053 | 0.6275 | 0.5621 | 2.9154 | 0.6275 | 0.6246 | 0.2166 | 0.1598 |
| No log | 37.0 | 259 | 1.7078 | 0.625 | 0.5628 | 2.9132 | 0.625 | 0.6216 | 0.2113 | 0.1600 |
| No log | 38.0 | 266 | 1.7098 | 0.6275 | 0.5631 | 2.9119 | 0.6275 | 0.6243 | 0.2209 | 0.1601 |
| No log | 39.0 | 273 | 1.7112 | 0.625 | 0.5632 | 2.9136 | 0.625 | 0.6221 | 0.2164 | 0.1604 |
| No log | 40.0 | 280 | 1.7122 | 0.625 | 0.5633 | 2.9183 | 0.625 | 0.6221 | 0.2206 | 0.1603 |
| No log | 41.0 | 287 | 1.7134 | 0.6275 | 0.5635 | 2.9473 | 0.6275 | 0.6247 | 0.2192 | 0.1602 |
| No log | 42.0 | 294 | 1.7142 | 0.625 | 0.5636 | 2.9477 | 0.625 | 0.6220 | 0.2172 | 0.1600 |
| No log | 43.0 | 301 | 1.7152 | 0.6275 | 0.5634 | 2.9471 | 0.6275 | 0.6245 | 0.2090 | 0.1598 |
| No log | 44.0 | 308 | 1.7160 | 0.6275 | 0.5634 | 2.9175 | 0.6275 | 0.6245 | 0.2074 | 0.1597 |
| No log | 45.0 | 315 | 1.7172 | 0.6275 | 0.5637 | 2.9171 | 0.6275 | 0.6245 | 0.2138 | 0.1597 |
| No log | 46.0 | 322 | 1.7188 | 0.63 | 0.5640 | 2.9184 | 0.63 | 0.6272 | 0.2138 | 0.1597 |
| No log | 47.0 | 329 | 1.7204 | 0.63 | 0.5642 | 2.9171 | 0.63 | 0.6277 | 0.2146 | 0.1599 |
| No log | 48.0 | 336 | 1.7220 | 0.63 | 0.5643 | 2.9178 | 0.63 | 0.6277 | 0.2088 | 0.1599 |
| No log | 49.0 | 343 | 1.7233 | 0.6325 | 0.5643 | 2.9162 | 0.6325 | 0.6296 | 0.2114 | 0.1597 |
| No log | 50.0 | 350 | 1.7244 | 0.6325 | 0.5644 | 2.9149 | 0.6325 | 0.6296 | 0.2117 | 0.1598 |
| No log | 51.0 | 357 | 1.7253 | 0.6325 | 0.5645 | 2.9168 | 0.6325 | 0.6296 | 0.2078 | 0.1597 |
| No log | 52.0 | 364 | 1.7260 | 0.63 | 0.5647 | 2.9198 | 0.63 | 0.6271 | 0.2002 | 0.1598 |
| No log | 53.0 | 371 | 1.7268 | 0.63 | 0.5649 | 2.9230 | 0.63 | 0.6270 | 0.2068 | 0.1596 |
| No log | 54.0 | 378 | 1.7271 | 0.6275 | 0.5649 | 2.9547 | 0.6275 | 0.6241 | 0.2019 | 0.1599 |
| No log | 55.0 | 385 | 1.7281 | 0.6275 | 0.5652 | 2.9814 | 0.6275 | 0.6241 | 0.2084 | 0.1599 |
| No log | 56.0 | 392 | 1.7293 | 0.6275 | 0.5652 | 2.9522 | 0.6275 | 0.6241 | 0.2086 | 0.1599 |
| No log | 57.0 | 399 | 1.7306 | 0.6275 | 0.5653 | 2.9227 | 0.6275 | 0.6244 | 0.2160 | 0.1600 |
| No log | 58.0 | 406 | 1.7315 | 0.6275 | 0.5654 | 2.9203 | 0.6275 | 0.6244 | 0.2140 | 0.1598 |
| No log | 59.0 | 413 | 1.7322 | 0.6275 | 0.5655 | 2.9190 | 0.6275 | 0.6244 | 0.2229 | 0.1600 |
| No log | 60.0 | 420 | 1.7333 | 0.6275 | 0.5657 | 2.9184 | 0.6275 | 0.6250 | 0.2150 | 0.1600 |
| No log | 61.0 | 427 | 1.7343 | 0.63 | 0.5658 | 2.9166 | 0.63 | 0.6273 | 0.2304 | 0.1599 |
| No log | 62.0 | 434 | 1.7351 | 0.63 | 0.5660 | 2.9230 | 0.63 | 0.6275 | 0.2154 | 0.1598 |
| No log | 63.0 | 441 | 1.7354 | 0.63 | 0.5660 | 2.9476 | 0.63 | 0.6275 | 0.2056 | 0.1597 |
| No log | 64.0 | 448 | 1.7359 | 0.63 | 0.5661 | 2.9483 | 0.63 | 0.6275 | 0.2050 | 0.1598 |
| No log | 65.0 | 455 | 1.7366 | 0.6275 | 0.5661 | 2.9515 | 0.6275 | 0.6250 | 0.2053 | 0.1600 |
| No log | 66.0 | 462 | 1.7371 | 0.6275 | 0.5661 | 2.9588 | 0.6275 | 0.6250 | 0.2110 | 0.1600 |
| No log | 67.0 | 469 | 1.7378 | 0.6275 | 0.5663 | 2.9780 | 0.6275 | 0.6250 | 0.2108 | 0.1599 |
| No log | 68.0 | 476 | 1.7384 | 0.6275 | 0.5663 | 2.9530 | 0.6275 | 0.6250 | 0.2150 | 0.1599 |
| No log | 69.0 | 483 | 1.7392 | 0.63 | 0.5663 | 2.9631 | 0.63 | 0.6275 | 0.2114 | 0.1596 |
| No log | 70.0 | 490 | 1.7398 | 0.63 | 0.5663 | 2.9778 | 0.63 | 0.6275 | 0.2129 | 0.1596 |
| No log | 71.0 | 497 | 1.7402 | 0.63 | 0.5664 | 2.9544 | 0.63 | 0.6275 | 0.2227 | 0.1596 |
| 0.1799 | 72.0 | 504 | 1.7408 | 0.63 | 0.5665 | 2.9521 | 0.63 | 0.6275 | 0.2157 | 0.1596 |
| 0.1799 | 73.0 | 511 | 1.7412 | 0.63 | 0.5666 | 2.9508 | 0.63 | 0.6275 | 0.2262 | 0.1596 |
| 0.1799 | 74.0 | 518 | 1.7417 | 0.63 | 0.5666 | 2.9509 | 0.63 | 0.6272 | 0.2248 | 0.1596 |
| 0.1799 | 75.0 | 525 | 1.7420 | 0.63 | 0.5666 | 2.9555 | 0.63 | 0.6272 | 0.2219 | 0.1596 |
| 0.1799 | 76.0 | 532 | 1.7425 | 0.63 | 0.5667 | 2.9541 | 0.63 | 0.6268 | 0.2233 | 0.1596 |
| 0.1799 | 77.0 | 539 | 1.7430 | 0.63 | 0.5668 | 2.9773 | 0.63 | 0.6276 | 0.2133 | 0.1596 |
| 0.1799 | 78.0 | 546 | 1.7435 | 0.63 | 0.5668 | 2.9772 | 0.63 | 0.6276 | 0.2134 | 0.1597 |
| 0.1799 | 79.0 | 553 | 1.7439 | 0.63 | 0.5669 | 2.9514 | 0.63 | 0.6276 | 0.2142 | 0.1596 |
| 0.1799 | 80.0 | 560 | 1.7444 | 0.6325 | 0.5669 | 2.9499 | 0.6325 | 0.6303 | 0.2118 | 0.1594 |
| 0.1799 | 81.0 | 567 | 1.7451 | 0.6325 | 0.5669 | 2.9506 | 0.6325 | 0.6303 | 0.2078 | 0.1594 |
| 0.1799 | 82.0 | 574 | 1.7455 | 0.6325 | 0.5670 | 2.9617 | 0.6325 | 0.6303 | 0.2079 | 0.1594 |
| 0.1799 | 83.0 | 581 | 1.7459 | 0.6325 | 0.5671 | 2.9766 | 0.6325 | 0.6303 | 0.2130 | 0.1594 |
| 0.1799 | 84.0 | 588 | 1.7463 | 0.63 | 0.5672 | 2.9770 | 0.63 | 0.6278 | 0.2085 | 0.1597 |
| 0.1799 | 85.0 | 595 | 1.7466 | 0.6275 | 0.5672 | 2.9768 | 0.6275 | 0.6250 | 0.2111 | 0.1598 |
| 0.1799 | 86.0 | 602 | 1.7469 | 0.63 | 0.5673 | 2.9769 | 0.63 | 0.6278 | 0.2086 | 0.1597 |
| 0.1799 | 87.0 | 609 | 1.7472 | 0.6275 | 0.5673 | 2.9770 | 0.6275 | 0.6250 | 0.2140 | 0.1598 |
| 0.1799 | 88.0 | 616 | 1.7474 | 0.6275 | 0.5674 | 2.9771 | 0.6275 | 0.6250 | 0.2111 | 0.1598 |
| 0.1799 | 89.0 | 623 | 1.7477 | 0.6275 | 0.5674 | 2.9772 | 0.6275 | 0.6250 | 0.2112 | 0.1598 |
| 0.1799 | 90.0 | 630 | 1.7480 | 0.6275 | 0.5675 | 2.9770 | 0.6275 | 0.6250 | 0.2112 | 0.1598 |
| 0.1799 | 91.0 | 637 | 1.7483 | 0.6275 | 0.5675 | 2.9770 | 0.6275 | 0.6250 | 0.2112 | 0.1599 |
| 0.1799 | 92.0 | 644 | 1.7485 | 0.6275 | 0.5676 | 2.9769 | 0.6275 | 0.6250 | 0.2112 | 0.1598 |
| 0.1799 | 93.0 | 651 | 1.7486 | 0.6275 | 0.5676 | 2.9770 | 0.6275 | 0.6250 | 0.2112 | 0.1598 |
| 0.1799 | 94.0 | 658 | 1.7488 | 0.6275 | 0.5676 | 2.9770 | 0.6275 | 0.6250 | 0.2131 | 0.1598 |
| 0.1799 | 95.0 | 665 | 1.7489 | 0.6275 | 0.5676 | 2.9768 | 0.6275 | 0.6250 | 0.2143 | 0.1598 |
| 0.1799 | 96.0 | 672 | 1.7491 | 0.6275 | 0.5676 | 2.9768 | 0.6275 | 0.6250 | 0.2161 | 0.1599 |
| 0.1799 | 97.0 | 679 | 1.7492 | 0.6275 | 0.5676 | 2.9768 | 0.6275 | 0.6250 | 0.2161 | 0.1599 |
| 0.1799 | 98.0 | 686 | 1.7493 | 0.6275 | 0.5677 | 2.9768 | 0.6275 | 0.6250 | 0.2161 | 0.1599 |
| 0.1799 | 99.0 | 693 | 1.7493 | 0.6275 | 0.5677 | 2.9769 | 0.6275 | 0.6250 | 0.2161 | 0.1599 |
| 0.1799 | 100.0 | 700 | 1.7493 | 0.6275 | 0.5677 | 2.9769 | 0.6275 | 0.6250 | 0.2161 | 0.1599 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
NasimB/gpt2-cocnat-mod-datasets3-rarity-all
|
NasimB
| 2023-07-11T13:13:07Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T11:20:45Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-cocnat-mod-datasets3-rarity-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-cocnat-mod-datasets3-rarity-all
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
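As a rough sketch (not the author's training script), the list above maps onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the list are the library defaults:

```python
from transformers import TrainingArguments

# Approximate equivalent of the hyperparameters listed above (output_dir is a placeholder).
training_args = TrainingArguments(
    output_dir="gpt2-cocnat-mod-datasets3-rarity-all",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed-precision training
)
```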
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7201 | 0.3 | 500 | 5.6554 |
| 5.3777 | 0.6 | 1000 | 5.2100 |
| 5.0257 | 0.91 | 1500 | 4.9662 |
| 4.7428 | 1.21 | 2000 | 4.8246 |
| 4.5916 | 1.51 | 2500 | 4.6972 |
| 4.4886 | 1.81 | 3000 | 4.5927 |
| 4.3213 | 2.12 | 3500 | 4.5355 |
| 4.173 | 2.42 | 4000 | 4.4840 |
| 4.1402 | 2.72 | 4500 | 4.4195 |
| 4.0833 | 3.02 | 5000 | 4.3844 |
| 3.8496 | 3.33 | 5500 | 4.3743 |
| 3.8398 | 3.63 | 6000 | 4.3421 |
| 3.8193 | 3.93 | 6500 | 4.3113 |
| 3.6103 | 4.23 | 7000 | 4.3294 |
| 3.5592 | 4.53 | 7500 | 4.3199 |
| 3.5442 | 4.84 | 8000 | 4.3041 |
| 3.4575 | 5.14 | 8500 | 4.3158 |
| 3.3572 | 5.44 | 9000 | 4.3191 |
| 3.3595 | 5.74 | 9500 | 4.3171 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
Bhanu9Prakash/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
Bhanu9Prakash
| 2023-07-11T13:05:14Z | 222 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:MIT/ast-finetuned-audioset-10-10-0.4593",
"base_model:finetune:MIT/ast-finetuned-audioset-10-10-0.4593",
"license:bsd-3-clause",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-11T12:44:34Z |
---
license: bsd-3-clause
base_model: MIT/ast-finetuned-audioset-10-10-0.4593
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.92
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Accuracy: 0.92
## Model description
More information needed
## Intended uses & limitations
More information needed
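Usage is not documented in this card; as a minimal sketch, the checkpoint should load with the standard `transformers` audio-classification pipeline (the audio file path below is a placeholder):

```python
from transformers import pipeline

# Minimal inference sketch; "my_track.wav" is a placeholder audio file.
classifier = pipeline(
    "audio-classification",
    model="Bhanu9Prakash/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
print(classifier("my_track.wav"))
```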
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0687 | 1.0 | 113 | 0.6197 | 0.84 |
| 0.299 | 2.0 | 226 | 0.5065 | 0.86 |
| 0.2634 | 3.0 | 339 | 0.5042 | 0.88 |
| 0.0473 | 4.0 | 452 | 0.5413 | 0.88 |
| 0.0033 | 5.0 | 565 | 0.3706 | 0.91 |
| 0.0003 | 6.0 | 678 | 0.4485 | 0.9 |
| 0.2538 | 7.0 | 791 | 0.4006 | 0.9 |
| 0.0002 | 8.0 | 904 | 0.3985 | 0.9 |
| 0.003 | 9.0 | 1017 | 0.3952 | 0.91 |
| 0.0001 | 10.0 | 1130 | 0.3966 | 0.92 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
kevynswhants/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
|
kevynswhants
| 2023-07-11T12:59:59Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-11T12:17:27Z |
---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4704
- Accuracy: 0.9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5809 | 1.0 | 112 | 0.5176 | 0.86 |
| 0.349 | 2.0 | 225 | 0.5205 | 0.84 |
| 0.4557 | 3.0 | 337 | 0.6501 | 0.8 |
| 0.4334 | 4.0 | 450 | 0.4594 | 0.87 |
| 0.2882 | 5.0 | 562 | 0.4685 | 0.9 |
| 0.0033 | 6.0 | 675 | 0.5052 | 0.89 |
| 0.0013 | 7.0 | 787 | 0.5655 | 0.91 |
| 0.006 | 8.0 | 900 | 0.7625 | 0.86 |
| 0.2319 | 9.0 | 1012 | 0.5293 | 0.91 |
| 0.0002 | 10.0 | 1125 | 0.4704 | 0.9 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
ashnrk/textual_inversion_residential
|
ashnrk
| 2023-07-11T12:59:49Z | 18 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T11:57:37Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_residential
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
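A minimal loading sketch with 🧨 Diffusers is shown below; it is not part of the original card, and the placeholder token in the prompt is an assumption (check the repo files for the actual token name):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then the learned embedding from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("ashnrk/textual_inversion_residential")

# "<textual_inversion_residential>" is a hypothetical placeholder token.
image = pipe("an aerial view of a <textual_inversion_residential> neighborhood").images[0]
image.save("residential.png")
```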
|
sumitrsch/multiconer2_muril_large_bn
|
sumitrsch
| 2023-07-11T12:41:30Z | 109 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-02-02T13:05:45Z |
For prediction on test data, use this Colab notebook: https://colab.research.google.com/drive/1K-ED0yvMsdciNo52rluauQBEAg-DBomC?usp=sharing
In the notebook, set `best_model_path = "sumitrsch/multiconer2_muril_large_bn"`.
If you use this code, please cite the paper "silp_nlp at SemEval-2023 Task 2: Cross-lingual Knowledge Transfer for Mono-lingual Learning":
https://aclanthology.org/2023.semeval-1.164
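For quick local inference outside the notebook, a minimal sketch with the standard `transformers` pipeline could look like this (not provided by the authors; the input sentence is a placeholder):

```python
from transformers import pipeline

# Load the checkpoint as a standard token-classification (NER) pipeline.
ner = pipeline(
    "token-classification",
    model="sumitrsch/multiconer2_muril_large_bn",
    aggregation_strategy="simple",
)
print(ner("Replace this with a Bangla test sentence."))
```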
|
digiplay/NewMarsMix_R11
|
digiplay
| 2023-07-11T12:33:05Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2023-07-11T00:59:45Z |
---
license: other
---
Model info:
https://civitai.com/models/19321/newmarsmix

|
Tritanium/VG-loras
|
Tritanium
| 2023-07-11T12:29:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-11T12:28:36Z |
This is a repo of anime video game character LoRAs. I didn't bother sorting them, to make the git clone easier lmao
|
srirammadduri-ts/roberta-base-squad2-finetuned-roberta
|
srirammadduri-ts
| 2023-07-11T12:26:34Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T12:06:31Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-squad2-finetuned-roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2-finetuned-roberta
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0003
## Model description
More information needed
## Intended uses & limitations
More information needed
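Usage is not documented in this card; a minimal sketch with the standard `transformers` question-answering pipeline might look like this (question and context are placeholders):

```python
from transformers import pipeline

# Minimal extractive QA sketch.
qa = pipeline(
    "question-answering",
    model="srirammadduri-ts/roberta-base-squad2-finetuned-roberta",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```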
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 0.0008 |
| No log | 2.0 | 4 | 0.0004 |
| No log | 3.0 | 6 | 0.0003 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nolanspecter/Reinforce-Cart-Pole
|
nolanspecter
| 2023-07-11T12:17:32Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T12:16:48Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cart-Pole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Bhanu9Prakash/distilhubert-finetuned-gtzan
|
Bhanu9Prakash
| 2023-07-11T12:11:24Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2023-07-11T10:39:13Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4719
- Accuracy: 0.85
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9857 | 1.0 | 113 | 1.7744 | 0.55 |
| 1.2769 | 2.0 | 226 | 1.1405 | 0.71 |
| 1.0336 | 3.0 | 339 | 0.8697 | 0.75 |
| 0.9106 | 4.0 | 452 | 0.8546 | 0.72 |
| 0.5839 | 5.0 | 565 | 0.5701 | 0.86 |
| 0.3163 | 6.0 | 678 | 0.5471 | 0.8 |
| 0.3682 | 7.0 | 791 | 0.4865 | 0.83 |
| 0.1245 | 8.0 | 904 | 0.4407 | 0.88 |
| 0.1412 | 9.0 | 1017 | 0.4737 | 0.84 |
| 0.1531 | 10.0 | 1130 | 0.4719 | 0.85 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
F-Haru/test
|
F-Haru
| 2023-07-11T12:10:12Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T12:03:50Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1686 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NasimB/gpt2-concat-all-mod-datasets1-rarity-all-c13k-c2p6k-rev
|
NasimB
| 2023-07-11T12:08:38Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T10:23:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-mod-datasets1-rarity-all-c13k-c2p6k-rev
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-mod-datasets1-rarity-all-c13k-c2p6k-rev
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8844
## Model description
More information needed
## Intended uses & limitations
More information needed
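Usage is not documented in this card; as a rough sketch, the checkpoint should work with the standard `transformers` text-generation pipeline (the prompt is just an example):

```python
from transformers import pipeline

# Minimal generation sketch.
generator = pipeline(
    "text-generation",
    model="NasimB/gpt2-concat-all-mod-datasets1-rarity-all-c13k-c2p6k-rev",
)
out = generator("Once upon a time", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```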
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7952 | 0.32 | 500 | 5.7564 |
| 5.4839 | 0.63 | 1000 | 5.3858 |
| 5.1442 | 0.95 | 1500 | 5.2129 |
| 4.8614 | 1.27 | 2000 | 5.0996 |
| 4.7476 | 1.59 | 2500 | 5.0222 |
| 4.6481 | 1.9 | 3000 | 4.9507 |
| 4.4504 | 2.22 | 3500 | 4.9514 |
| 4.3796 | 2.54 | 4000 | 4.8926 |
| 4.3388 | 2.85 | 4500 | 4.8553 |
| 4.1754 | 3.17 | 5000 | 4.8697 |
| 4.0695 | 3.49 | 5500 | 4.8398 |
| 4.0541 | 3.8 | 6000 | 4.8012 |
| 3.9275 | 4.12 | 6500 | 4.8334 |
| 3.7741 | 4.44 | 7000 | 4.8258 |
| 3.7686 | 4.76 | 7500 | 4.8182 |
| 3.7113 | 5.07 | 8000 | 4.8313 |
| 3.5725 | 5.39 | 8500 | 4.8351 |
| 3.5762 | 5.71 | 9000 | 4.8371 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
ootes/wwtb
|
ootes
| 2023-07-11T11:52:34Z | 0 | 1 | null |
[
"arxiv:2211.09800",
"region:us"
] | null | 2023-07-04T10:03:04Z |
# InstructPix2Pix: Learning to Follow Image Editing Instructions
### [Project Page](https://www.timothybrooks.com/instruct-pix2pix/) | [Paper](https://arxiv.org/abs/2211.09800) | [Data](http://instruct-pix2pix.eecs.berkeley.edu/)
PyTorch implementation of InstructPix2Pix, an instruction-based image editing model, based on the original [CompVis/stable_diffusion](https://github.com/CompVis/stable-diffusion) repo. <br>
[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://www.timothybrooks.com/instruct-pix2pix/)
[Tim Brooks](https://www.timothybrooks.com/)\*,
[Aleksander Holynski](https://holynski.org/)\*,
[Alexei A. Efros](https://people.eecs.berkeley.edu/~efros/) <br>
UC Berkeley <br>
\*denotes equal contribution
<img src='https://instruct-pix2pix.timothybrooks.com/teaser.jpg'/>
## TL;DR: quickstart
Follow the instructions below to download and run InstructPix2Pix on your own images. These instructions have been tested on a GPU with >18GB VRAM. If you don't have a GPU, you may need to change the default configuration, or check out [other ways of using the model](https://github.com/timothybrooks/instruct-pix2pix#other-ways-of-using-instructpix2pix).
### Set up a conda environment, and download a pretrained model:
```
conda env create -f environment.yaml
conda activate ip2p
bash scripts/download_checkpoints.sh
```
### Edit a single image:
```
python edit_cli.py --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
# Optionally, you can specify parameters to tune your result:
# python edit_cli.py --steps 100 --resolution 512 --seed 1371 --cfg-text 7.5 --cfg-image 1.2 --input imgs/example.jpg --output imgs/output.jpg --edit "turn him into a cyborg"
```
### Or launch your own interactive editing Gradio app:
```
python edit_app.py
```

_(For advice on how to get the best results by tuning parameters, see the [Tips](https://github.com/timothybrooks/instruct-pix2pix#tips) section)._
## Setup
Install all dependencies with:
```
conda env create -f environment.yaml
```
Download the pretrained models by running:
```
bash scripts/download_checkpoints.sh
```
## Generated Dataset
Our image editing model is trained on a generated dataset consisting of 454,445 examples. Each example contains (1) an input image, (2) an editing instruction, and (3) an output edited image. We provide two versions of the dataset, one in which each pair of edited images is generated 100 times, and the best examples are chosen based on CLIP metrics (Section 3.1.2 in the paper) (`clip-filtered-dataset`), and one in which examples are randomly chosen (`random-sample-dataset`).
For the released version of this dataset, we've additionally filtered prompts and images for NSFW content. After NSFW filtering, the GPT-3 generated dataset contains 451,990 examples. The final image-pair datasets contain:
| | # of image editing examples | Dataset size |
|--|-----------------------|----------------------- |
| `random-sample-dataset` |451990|727GB|
| `clip-filtered-dataset` |313010|436GB|
To download one of these datasets, along with the entire NSFW-filtered text data, run the following command with the appropriate dataset name:
```
bash scripts/download_data.sh clip-filtered-dataset
```
## Training InstructPix2Pix
InstructPix2Pix is trained by fine-tuning from an initial StableDiffusion checkpoint. The first step is to download a Stable Diffusion checkpoint. For our trained models, we used the v1.5 checkpoint as the starting point. To download the same ones we used, you can run the following script:
```
bash scripts/download_pretrained_sd.sh
```
If you'd like to use a different checkpoint, point to it in the config file `configs/train.yaml`, on line 8, after `ckpt_path:`.
Next, we need to change the config to point to our downloaded (or generated) dataset. If you're using the `clip-filtered-dataset` from above, you can skip this. Otherwise, you may need to edit lines 85 and 94 of the config (`data.params.train.params.path`, `data.params.validation.params.path`).
Finally, start a training job with the following command:
```
python main.py --name default --base configs/train.yaml --train --gpus 0,1,2,3,4,5,6,7
```
## Creating your own dataset
Our generated dataset of paired images and editing instructions is made in two phases: First, we use GPT-3 to generate text triplets: (a) a caption describing an image, (b) an edit instruction, (c) a caption describing the image after the edit. Then, we turn pairs of captions (before/after the edit) into pairs of images using Stable Diffusion and Prompt-to-Prompt.
### (1) Generate a dataset of captions and instructions
We provide our generated dataset of captions and edit instructions [here](https://instruct-pix2pix.eecs.berkeley.edu/gpt-generated-prompts.jsonl). If you plan to use our captions+instructions, skip to step (2). Otherwise, if you would like to create your own text dataset, please follow steps (1.1-1.3) below. Note that generating very large datasets using GPT-3 can be expensive.
#### (1.1) Manually write a dataset of instructions and captions
The first step of the process is fine-tuning GPT-3. To do this, we made a dataset of 700 examples broadly covering the kinds of edits that we might want our model to be able to perform. Our examples are available [here](https://instruct-pix2pix.eecs.berkeley.edu/human-written-prompts.jsonl). These should be diverse and cover a wide range of possible captions and types of edits. Ideally, they should avoid duplication or significant overlap of captions and instructions. It is also important to be mindful of limitations of Stable Diffusion and Prompt-to-Prompt in writing these examples, such as inability to perform large spatial transformations (e.g., moving the camera, zooming in, swapping object locations).
Input prompts should closely match the distribution of input prompts used to generate the larger dataset. We sampled the 700 input prompts from the _LAION Improved Aesthetics 6.5+_ dataset and also use this dataset for generating examples. We found this dataset is quite noisy (many of the captions are overly long and contain irrelevant text). For this reason, we also considered MSCOCO and LAION-COCO datasets, but ultimately chose _LAION Improved Aesthetics 6.5+_ due to its diversity of content, proper nouns, and artistic mediums. If you choose to use another dataset or combination of datasets as input to GPT-3 when generating examples, we recommend you sample the input prompts from the same distribution when manually writing training examples.
#### (1.2) Finetune GPT-3
The next step is to finetune a large language model on the manually written instructions/outputs to generate edit instructions and edited caption from a new input caption. For this, we finetune GPT-3's Davinci model via the OpenAI API, although other language models could be used.
To prepare training data for GPT-3, one must first create an OpenAI developer account to access the needed APIs, and [set up the API keys on your local device](https://beta.openai.com/docs/api-reference/introduction). Also, run the `dataset_creation/prepare_for_gpt.py` script, which forms the prompts into the correct format by concatenating instructions and captions and adding delimiters and stop sequences.
```bash
python dataset_creation/prepare_for_gpt.py --input-path data/human-written-prompts.jsonl --output-path data/human-written-prompts-for-gpt.jsonl
```
Next, finetune GPT-3 via the OpenAI CLI. We provide an example below, although please refer to OpenAI's official documentation for this, as best practices may change. We trained the Davinci model for a single epoch. You can experiment with smaller less expensive GPT-3 variants or with open source language models, although this may negatively affect performance.
```bash
openai api fine_tunes.create -t data/human-written-prompts-for-gpt.jsonl -m davinci --n_epochs 1 --suffix "instruct-pix2pix"
```
You can test out the finetuned GPT-3 model by launching the provided Gradio app:
```bash
python prompt_app.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
```

#### (1.3) Generate a large dataset of captions and instructions
We now use the finetuned GPT-3 model to generate a large dataset. Our dataset cost thousands of dollars to create. See `dataset_creation/generate_txt_dataset.py` for the script which generates these examples. We recommend first generating a small number of examples (by setting a low value of `--num-samples`) and checking that the results look as desired before increasing the scale.
```bash
python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME
```
If you are generating at a very large scale (e.g., 100K+), it will be noticeably faster to generate the dataset with multiple processes running in parallel. This can be accomplished by setting `--partitions=N` to a higher number and running multiple processes, setting each `--partition` to the corresponding value.
```bash
python dataset_creation/generate_txt_dataset.py --openai-api-key OPENAI_KEY --openai-model OPENAI_MODEL_NAME --partitions=10 --partition=0
```
### (2) Turn paired captions into paired images
The next step is to turn pairs of text captions into pairs of images. For this, we need to copy some pre-trained Stable Diffusion checkpoints to `stable_diffusion/models/ldm/stable-diffusion-v1/`. You may have already done this if you followed the instructions above for training with our provided data, but if not, you can do this by running:
```bash
bash scripts/download_pretrained_sd.sh
```
For our model, we used [checkpoint v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned.ckpt), and the [new autoencoder](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/resolve/main/vae-ft-mse-840000-ema-pruned.ckpt), but other models may work as well. If you choose to use other models, make sure to point to the corresponding checkpoints by passing the `--ckpt` and `--vae-ckpt` arguments. Once all checkpoints have been downloaded, we can generate the dataset with the following command:
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl
```
This command operates on a single GPU (typically a V100 or A100). To parallelize over many GPUs/machines, set `--n-partitions` to the total number of parallel jobs and `--partition` to the index of each job.
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-partitions 100 --partition 0
```
The default parameters match that of our dataset, although in practice you can use a smaller number of steps (e.g., `--steps=25`) to generate high quality data faster. By default, we generate 100 samples per prompt and use CLIP filtering to keep a max of 4 per prompt. You can experiment with fewer samples by setting `--n-samples`. The command below turns off CLIP filtering entirely and is therefore faster:
```
python dataset_creation/generate_img_dataset.py --out_dir data/instruct-pix2pix-dataset-000 --prompts_file path/to/generated_prompts.jsonl --n-samples 4 --clip-threshold 0 --clip-dir-threshold 0 --clip-img-threshold 0 --n-partitions 100 --partition 0
```
After generating all of the dataset examples, run the following command to create a list of the examples. This is needed so the dataset object can efficiently sample examples without iterating over the entire dataset directory at the start of each training run.
```
python dataset_creation/prepare_dataset.py data/instruct-pix2pix-dataset-000
```
## Evaluation
To generate plots like the ones in Figures 8 and 10 in the paper, run the following command:
```
python metrics/compute_metrics.py --ckpt /path/to/your/model.ckpt
```
## Tips
If you're not getting the quality result you want, there may be a few reasons:
1. **Is the image not changing enough?** Your Image CFG weight may be too high. This value dictates how similar the output should be to the input. It's possible your edit requires larger changes from the original image, and your Image CFG weight isn't allowing that. Alternatively, your Text CFG weight may be too low. This value dictates how much to listen to the text instruction. The default Image CFG of 1.5 and Text CFG of 7.5 are a good starting point, but aren't necessarily optimal for each edit. Try:
* Decreasing the Image CFG weight, or
* Increasing the Text CFG weight
2. Conversely, **is the image changing too much**, such that the details in the original image aren't preserved? Try:
* Increasing the Image CFG weight, or
* Decreasing the Text CFG weight
3. Try generating results with different random seeds by setting "Randomize Seed" and running generation multiple times. You can also try setting "Randomize CFG" to sample new Text CFG and Image CFG values each time.
4. Rephrasing the instruction sometimes improves results (e.g., "turn him into a dog" vs. "make him a dog" vs. "as a dog").
5. Increasing the number of steps sometimes improves results.
6. Do faces look weird? The Stable Diffusion autoencoder has a hard time with faces that are small in the image. Try cropping the image so the face takes up a larger portion of the frame.
## Comments
- Our codebase is based on the [Stable Diffusion codebase](https://github.com/CompVis/stable-diffusion).
## BibTeX
```
@article{brooks2022instructpix2pix,
title={InstructPix2Pix: Learning to Follow Image Editing Instructions},
author={Brooks, Tim and Holynski, Aleksander and Efros, Alexei A},
journal={arXiv preprint arXiv:2211.09800},
year={2022}
}
```
## Other ways of using InstructPix2Pix
### InstructPix2Pix on [HuggingFace](https://huggingface.co/spaces/timbrooks/instruct-pix2pix):
> A browser-based version of the demo is available as a [HuggingFace space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix). For this version, you only need a browser, a picture you want to edit, and an instruction! Note that this is a shared online demo, and processing time may be slower during peak utilization.
### InstructPix2Pix on [Replicate](https://replicate.com/timothybrooks/instruct-pix2pix):
> Replicate provides a production-ready cloud API for running the InstructPix2Pix model. You can run the model from any environment using a simple API call with cURL, Python, JavaScript, or your language of choice. Replicate also provides a web interface for running the model and sharing predictions.
### InstructPix2Pix in [Imaginairy](https://github.com/brycedrennan/imaginAIry#-edit-images-with-instructions-alone-by-instructpix2pix):
> Imaginairy offers another way of easily installing InstructPix2Pix with a single command. It can run on devices without GPUs (like a Macbook!).
> ```bash
> pip install imaginairy --upgrade
> aimg edit any-image.jpg --gif "turn him into a cyborg"
> ```
> It also offers an easy way to perform a bunch of edits on an image, and can save edits out to an animated GIF:
> ```
> aimg edit --gif --surprise-me pearl-earring.jpg
> ```
> <img src="https://raw.githubusercontent.com/brycedrennan/imaginAIry/7c05c3aae2740278978c5e84962b826e58201bac/assets/girl_with_a_pearl_earring_suprise.gif" width="512">
### InstructPix2Pix in [🧨 Diffusers](https://github.com/huggingface/diffusers):
> InstructPix2Pix in Diffusers is a bit more optimized, so it may be faster and more suitable for GPUs with less memory. Below are instructions for installing the library and editing an image:
> 1. Install diffusers and relevant dependencies:
>
> ```bash
> pip install transformers accelerate torch
>
> pip install git+https://github.com/huggingface/diffusers.git
> ```
>
> 2. Load the model and edit the image:
>
> ```python
>
> import torch
> from diffusers import StableDiffusionInstructPix2PixPipeline, EulerAncestralDiscreteScheduler
>
> model_id = "timbrooks/instruct-pix2pix"
> pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16, safety_checker=None)
> pipe.to("cuda")
> pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
> # `image` is an RGB PIL.Image
> images = pipe("turn him into cyborg", image=image).images
> images[0]
> ```
>
> For more information, check the docs [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix).
|
jphme/orca_mini_v2_ger_7b-GGML
|
jphme
| 2023-07-11T11:50:45Z | 0 | 1 |
transformers
|
[
"transformers",
"text-generation",
"de",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
text-generation
| 2023-07-11T10:39:52Z |
---
inference: false
license: cc-by-nc-sa-4.0
language:
- de
- en
library_name: transformers
pipeline_tag: text-generation
---
# Orca Mini v2 German 7b GGML
These files are GGML format model files for [Orca Mini v2 German 7b](https://huggingface.co/jphme/orca_mini_v2_ger_7b). Please find all information about the model in the original repository.
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Prompt template:
```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.
### User:
prompt
### Response:
```
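As a rough sketch (not an official example), the template can be filled in and run locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), assuming a build that still reads GGMLv3 files; the question below is just an example:

```python
from llama_cpp import Llama

# Load the q4_0 GGML file listed below and apply the prompt template above.
llm = Llama(model_path="orca-mini-v2-ger-7b.ggmlv3.q4_0.bin", n_ctx=2048)

prompt = (
    "### System:\n"
    "You are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
    "### User:\n"
    "Was ist die Hauptstadt von Deutschland?\n\n"
    "### Response:\n"
)
output = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
print(output["choices"][0]["text"])
```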
## Compatibility
### `q4_0`
So far, I only quantized a `q4_0` version for my own use. Please let me know if there is demand for other quantizations.
These should be compatible with any UIs, tools, and libraries released since late May.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| orca-mini-v2-ger-7b.ggmlv3.q4_0.bin | q4_0 | 4 | 3.83 GB | ~6.3 GB | Original llama.cpp quant method, 4-bit. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m orca-mini-v2-ger-7b.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are an story writing assistant who writes very long, detailed and interesting stories\n\n### User:\nWrite a story about llamas\n\n### Response:\n"
```
If you're able to use full GPU offloading, you should use `-t 1` to get best performance.
If not able to fully offload to GPU, you should use more cores. Change `-t 10` to the number of physical CPU cores you have, or a lower number depending on what gives best performance.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
## Thanks
Special thanks to [Pankaj Mathur](https://huggingface.co/psmathur) for the great Orca Mini base model and [TheBloke](https://huggingface.co/TheBloke) for his great work quantizing billions of models (and for his template for this README).
|
sofia-todeschini/PubMedBERT-LitCovid-v1.2
|
sofia-todeschini
| 2023-07-11T11:50:04Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T08:51:46Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: PubMedBERT-LitCovid-v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PubMedBERT-LitCovid-v1.2
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0998
- F1: 0.9200
- Roc Auc: 0.9529
- Accuracy: 0.7868
## Model description
More information needed
## Intended uses & limitations
More information needed
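Usage is not documented in this card; since the reported metrics (F1, ROC AUC) point to multi-label topic classification, a minimal sketch would request all label scores and threshold them downstream. The abstract text and the 0.5 threshold below are assumptions:

```python
from transformers import pipeline

# Return scores for every label; threshold downstream for multi-label predictions.
clf = pipeline(
    "text-classification",
    model="sofia-todeschini/PubMedBERT-LitCovid-v1.2",
    top_k=None,
)
scores = clf(["Efficacy of mRNA vaccines against SARS-CoV-2 variants."])[0]
predicted = [s["label"] for s in scores if s["score"] > 0.5]
print(predicted)
```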
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.1017 | 1.0 | 2211 | 0.0897 | 0.9155 | 0.9492 | 0.7722 |
| 0.0742 | 2.0 | 4422 | 0.0868 | 0.9177 | 0.9508 | 0.7778 |
| 0.0559 | 3.0 | 6633 | 0.0903 | 0.9191 | 0.9521 | 0.7827 |
| 0.0396 | 4.0 | 8844 | 0.0955 | 0.9184 | 0.9512 | 0.7814 |
| 0.0282 | 5.0 | 11055 | 0.0998 | 0.9200 | 0.9529 | 0.7868 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
vineetsharma/ppo-LunarLander-v2
|
vineetsharma
| 2023-07-11T11:35:07Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T11:34:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 281.79 +/- 14.76
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Penbloom/Penbloom_semirealmix
|
Penbloom
| 2023-07-11T11:34:24Z | 0 | 0 | null |
[
"musclar",
"korean",
"license:openrail",
"region:us"
] | null | 2023-04-08T14:59:05Z |
---
license: openrail
tags:
- musclar
- korean
---
## Model Detail & Merge Recipes
Penbloom_semirealmix aims to create muscular girls with nice skin texture and detailed clothes. This is a ``merge`` model.
## Source model
[Civitai:Beenyou|Stable Diffusion Checkpoint](https://civitai.com/models/27688/beenyou)
[⚠NSFW][Civitai:饭特稀|Stable Diffusion Checkpoint](https://civitai.com/models/18427/v08)
### Penbloom_semirealmix_v1.0
|
chandrasutrisnotjhong/ppo-Huggy
|
chandrasutrisnotjhong
| 2023-07-11T11:32:07Z | 15 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-11T11:32:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: chandrasutrisnotjhong/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
vvasanth/falcon7b-finetune-test-220623_1
|
vvasanth
| 2023-07-11T11:31:41Z | 0 | 0 | null |
[
"text-generation",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2023-07-04T11:51:13Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
|
jasonyim2/distilbert-base-uncased-finetuned-emotion
|
jasonyim2
| 2023-07-11T11:22:55Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-26T06:45:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215386837894378
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2227
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8265 | 1.0 | 250 | 0.3204 | 0.9 | 0.8963 |
| 0.2534 | 2.0 | 500 | 0.2227 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
antonioalvarado/text_analyzer_base_bert
|
antonioalvarado
| 2023-07-11T11:21:46Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T10:55:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: text_analyzer_base_bert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_analyzer_base_bert
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0472
- Accuracy: 0.9861
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3672 | 1.0 | 1728 | 0.1788 | 0.9469 |
| 0.1509 | 2.0 | 3456 | 0.1311 | 0.9769 |
| 0.0071 | 3.0 | 5184 | 0.0494 | 0.9861 |
| 0.0076 | 4.0 | 6912 | 0.0472 | 0.9861 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.12.0+cu102
- Datasets 2.13.1
- Tokenizers 0.13.3
|
openwaifu/SoVits-VC-Chtholly-Nota-Seniorious-0.1
|
openwaifu
| 2023-07-11T11:19:42Z | 1 | 0 |
transformers
|
[
"transformers",
"anime",
"audio",
"tts",
"voice conversion",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-04-17T12:07:10Z |
---
license: mit
tags:
- anime
- audio
- tts
- voice conversion
---
Original (generated from TTS):
<audio controls src="https://s3.amazonaws.com/moonup/production/uploads/62d3a59dc72c791b23918293/neVwV9PEc0gGylrEup2Kn.wav"></audio>
Converted (using SoVits Chtholly-VC):
<audio controls src="https://s3.amazonaws.com/moonup/production/uploads/62d3a59dc72c791b23918293/oKNg3kVgAb7utyGCZa8f9.wav"></audio>
|
1aurent/CartPole-v1
|
1aurent
| 2023-07-11T11:15:03Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T10:42:02Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 498.08 +/- 19.05
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
digiplay/BasilKorea_v2
|
digiplay
| 2023-07-11T11:00:49Z | 315 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T10:27:11Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
|
foxxy-hm/mt5-small-finetuned-wikilingua-en-vi
|
foxxy-hm
| 2023-07-11T10:54:41Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-07-10T18:08:57Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: foxxy-hm/mt5-small-finetuned-wikilingua-en-vi
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# foxxy-hm/mt5-small-finetuned-wikilingua-en-vi
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0995
- Validation Loss: 1.7594
- Epoch: 7
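As a rough inference sketch (not part of the original card; `framework="tf"` is assumed because the repo advertises TensorFlow weights, and the input string is a placeholder):
```python
from transformers import pipeline

# Assumption: the repo ships TF weights, hence framework="tf"
summarizer = pipeline(
    "text2text-generation",
    model="foxxy-hm/mt5-small-finetuned-wikilingua-en-vi",
    framework="tf",
)
print(summarizer("Paste the English article to be summarized in Vietnamese here.", max_length=64))
```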
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 16208, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.4772 | 2.1093 | 0 |
| 2.8808 | 1.9781 | 1 |
| 2.5121 | 1.8920 | 2 |
| 2.3034 | 1.8253 | 3 |
| 2.1755 | 1.7736 | 4 |
| 2.1104 | 1.7594 | 5 |
| 2.0990 | 1.7594 | 6 |
| 2.0995 | 1.7594 | 7 |
### Framework versions
- Transformers 4.27.2
- TensorFlow 2.12.0
- Datasets 2.11.0
- Tokenizers 0.13.3
|
nickw9/ppo-LunarLander-v2
|
nickw9
| 2023-07-11T10:48:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T10:48:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.15 +/- 10.89
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list if it differs):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="nickw9/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Winmodel/ML-Agents-Pyramids
|
Winmodel
| 2023-07-11T10:36:07Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2023-07-11T10:36:05Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Winmodel/ML-Agents-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
mort1k/q-FrozenLake-v1-4x4-noSlippery
|
mort1k
| 2023-07-11T10:35:42Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T10:35:41Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="mort1k/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5
|
F-Haru
| 2023-07-11T10:29:25Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-07-11T09:35:14Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
This model was first fine-tuned using only negative ja-en / en-ja pairs whose cosine similarity is at least 0.9 or at most 0.4,
and was then knowledge-distilled using paraphrase-mpnet-base-v2 as the teacher model.
# F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5')
model = AutoModel.from_pretrained('F-Haru/paraphrase-mpnet-base-v2_09-04-MarginMSELoss-finetuning-7-5')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1686 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
bofenghuang/vigogne-7b-instruct
|
bofenghuang
| 2023-07-11T10:18:13Z | 1,493 | 23 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"fr",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-03-22T21:36:45Z |
---
license: openrail
language:
- fr
pipeline_tag: text-generation
library_name: transformers
tags:
- llama
- LLM
inference: false
---
<p align="center" width="100%">
<img src="https://huggingface.co/bofenghuang/vigogne-7b-instruct/resolve/main/vigogne_logo.png" alt="Vigogne" style="width: 40%; min-width: 300px; display: block; margin: auto;">
</p>
# Vigogne-7B-Instruct: A French Instruction-following LLaMA Model
Vigogne-7B-Instruct is a LLaMA-7B model fine-tuned to follow French instructions.
For more information, please visit the GitHub repo: https://github.com/bofenghuang/vigogne
**Usage and License Notices**: Same as [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca), Vigogne is intended and licensed for research use only. The dataset is CC BY NC 4.0 (allowing only non-commercial use) and models trained using the dataset should not be used outside of research purposes.
## Changelog
All versions are available in branches.
- **V1.0**: Initial release, trained on the translated Stanford Alpaca dataset.
- **V1.1**: Improved translation quality of the Stanford Alpaca dataset.
- **V2.0**: Expanded training dataset to 224k for better performance.
- **V3.0**: Further expanded training dataset to 262k for improved results.
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from vigogne.preprocess import generate_instruct_prompt
model_name_or_path = "bofenghuang/vigogne-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, padding_side="right", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16, device_map="auto")
user_query = "Expliquez la différence entre DoS et phishing."
prompt = generate_instruct_prompt(user_query)
input_ids = tokenizer(prompt, return_tensors="pt")["input_ids"].to(model.device)
input_length = input_ids.shape[1]
generated_outputs = model.generate(
input_ids=input_ids,
generation_config=GenerationConfig(
temperature=0.1,
do_sample=True,
repetition_penalty=1.0,
max_new_tokens=512,
),
return_dict_in_generate=True,
)
generated_tokens = generated_outputs.sequences[0, input_length:]
generated_text = tokenizer.decode(generated_tokens, skip_special_tokens=True)
print(generated_text)
```
You can also run inference with this model using the following Google Colab notebook.
<a href="https://colab.research.google.com/github/bofenghuang/vigogne/blob/main/notebooks/infer_instruct.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Limitations
Vigogne is still under development, and there are many limitations that have to be addressed. Please note that it is possible that the model generates harmful or biased content, incorrect information or generally unhelpful answers.
|
ivivnov/ppo-LunarLander-v2
|
ivivnov
| 2023-07-11T09:56:04Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T09:55:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.61 +/- 15.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list if it differs):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The filename is assumed, not confirmed by this card
checkpoint = load_from_hub(repo_id="ivivnov/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ashnrk/textual_inversion_industrial
|
ashnrk
| 2023-07-11T09:52:03Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-07-11T08:49:45Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - ashnrk/textual_inversion_industrial
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
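A minimal loading sketch with 🤗 Diffusers (not part of the original card; the trigger token `<industrial>` in the prompt is an assumption — check the repository's embedding file for the actual token):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Load the learned embedding from this repo
pipe.load_textual_inversion("ashnrk/textual_inversion_industrial")

# The trigger token below is hypothetical; replace it with the token stored in the embedding
image = pipe("a photo of <industrial> buildings at dusk").images[0]
image.save("industrial.png")
```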
|
Winmodel/ML-Agents-SnowballTarget
|
Winmodel
| 2023-07-11T09:47:03Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-07-11T09:47:02Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Winmodel/ML-Agents-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dsfsi/ss-en-m2m100-gov
|
dsfsi
| 2023-07-11T09:39:30Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"m2m_100",
"text2text-generation",
"m2m100",
"translation",
"africanlp",
"african",
"siswati",
"ss",
"en",
"arxiv:2303.03750",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-22T08:46:05Z |
---
license: cc-by-4.0
language:
- ss
- en
pipeline_tag: text2text-generation
tags:
- m2m100
- translation
- africanlp
- african
- siswati
---
# [ss-en] Siswati to English Translation Model based on M2M100 and The South African Gov-ZA multilingual corpus
Model created from Siswati to English aligned sentences from [The South African Gov-ZA multilingual corpus](https://github.com/dsfsi/gov-za-multilingual)
The dataset contains cabinet statements from the South African government, maintained by the Government Communication and Information System (GCIS). Data was scraped from the government's website: https://www.gov.za/cabinet-statements
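A minimal translation sketch with 🤗 Transformers (not from the original card; it assumes the fine-tuned checkpoint keeps the standard M2M100 interface with the `ss` and `en` language codes, and the input sentence is a placeholder):
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_id = "dsfsi/ss-en-m2m100-gov"
tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

tokenizer.src_lang = "ss"  # Siswati source (assumed M2M100 language code)
inputs = tokenizer("Sawubona, unjani?", return_tensors="pt")  # placeholder Siswati sentence
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id("en"))
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```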
## Authors
- Vukosi Marivate - [@vukosi](https://twitter.com/vukosi)
- Matimba Shingange
- Richard Lastrucci
- Isheanesu Joseph Dzingirai
- Jenalea Rajab
## BibTeX entry and citation info
```
@inproceedings{lastrucci-etal-2023-preparing,
title = "Preparing the Vuk{'}uzenzele and {ZA}-gov-multilingual {S}outh {A}frican multilingual corpora",
author = "Richard Lastrucci and Isheanesu Dzingirai and Jenalea Rajab and Andani Madodonga and Matimba Shingange and Daniel Njini and Vukosi Marivate",
booktitle = "Proceedings of the Fourth workshop on Resources for African Indigenous Languages (RAIL 2023)",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.rail-1.3",
pages = "18--25"
}
```
[Paper - Preparing the Vuk'uzenzele and ZA-gov-multilingual South African multilingual corpora](https://arxiv.org/abs/2303.03750)
|
subandwho/trial3
|
subandwho
| 2023-07-11T09:27:02Z | 3 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T09:26:58Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
sjdata/vit-base-beans
|
sjdata
| 2023-07-11T09:16:28Z | 189 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T08:43:38Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0138
- Accuracy: 1.0
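As a rough inference sketch (not part of the auto-generated card; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sjdata/vit-base-beans")
# Replace with a path or URL to a bean-leaf image
print(classifier("bean_leaf.jpg"))
```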
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1595 | 1.54 | 100 | 0.1212 | 0.9699 |
| 0.014 | 3.08 | 200 | 0.0138 | 1.0 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
erberry/Ziya-LLaMA-13B-v1.1-merged
|
erberry
| 2023-07-11T08:57:33Z | 11 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"zh",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-05T09:45:09Z |
---
language:
- zh
---
Quoted from [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
> Due to the license restrictions on the LLaMA weights, this model must not be used for commercial purposes. Please strictly follow LLaMA's usage policy.

Given the license restrictions on the LLaMA weights, we cannot release the complete model weights directly.
We therefore used the open-source FastChat tooling as a base and optimized it further.
We computed and released the delta between the Ziya-LLaMA-13B-v1 weights and the original LLaMA weights.
The merge procedure is described in the following notebook:
[ipynb](https://github.com/erberry/ThinkML/blob/main/ziya_llama_merge.ipynb)
Here, ggml-model-q5-1.bin is the model quantized with llama.cpp.
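A minimal loading sketch with 🤗 Transformers (not from the original card; the prompt is a placeholder and Ziya-specific prompt formatting is not covered here):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "erberry/Ziya-LLaMA-13B-v1.1-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("Hello, please introduce yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```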
**License** Non-commercial bespoke license
|
Bluishoul/grimoire-model
|
Bluishoul
| 2023-07-11T08:55:00Z | 0 | 0 |
transformers
|
[
"transformers",
"text-classification",
"dataset:Open-Orca/OpenOrca",
"doi:10.57967/hf/0873",
"license:openrail",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-30T02:45:26Z |
---
license: openrail
pipeline_tag: text-classification
library_name: transformers
datasets:
- Open-Orca/OpenOrca
---
|
Winmodel/Pixelcopter-PLE-v0
|
Winmodel
| 2023-07-11T08:54:35Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T08:52:52Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 47.50 +/- 23.78
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
zhundred/SpaceInvadersNoFrameskip-v4
|
zhundred
| 2023-07-11T08:52:32Z | 9 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T08:52:02Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 415.00 +/- 187.28
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhundred -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga zhundred -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga zhundred
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
vidric/bertopic_after_tuning
|
vidric
| 2023-07-11T08:51:20Z | 4 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2023-07-11T08:50:48Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# bertopic_after_tuning
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("vidric/bertopic_after_tuning")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 68
* Number of training documents: 20977
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | parfum - wangi - tidak - banget - tapi | 11 | -1_parfum_wangi_tidak_banget |
| 0 | wangi - mantap - enak - banget - suka | 6775 | 0_wangi_mantap_enak_banget |
| 1 | orgasm - eos - aku - lebih - farhampton | 10606 | 1_orgasm_eos_aku_lebih |
| 2 | terbaik - selalu - always - deal - puas | 886 | 2_terbaik_selalu_always_deal |
| 3 | admin - ramah - respon - cepat - kirim | 191 | 3_admin_ramah_respon_cepat |
| 4 | bubble - wrap - tebal - aman - kotak | 187 | 4_bubble_wrap_tebal_aman |
| 5 | alpha - theta - beli - tim - tetha | 154 | 5_alpha_theta_beli_tim |
| 6 | beli - menyesal - kali - lagi - ragu | 125 | 6_beli_menyesal_kali_lagi |
| 7 | kedua - beli - dua - kali - 2x | 109 | 7_kedua_beli_dua_kali |
| 8 | pernah - kecewa - tidak - selalu - hmns | 98 | 8_pernah_kecewa_tidak_selalu |
| 9 | farhampton - coba - beli - terimakasih - bonus | 86 | 9_farhampton_coba_beli_terimakasih |
| 10 | delta - theta - alpha - sama - suka | 85 | 10_delta_theta_alpha_sama |
| 11 | tidak - debat - ekspektasi - no - nget | 83 | 11_tidak_debat_ekspektasi_no |
| 12 | magnet - tutup - botol - ada - kemasan | 80 | 12_magnet_tutup_botol_ada |
| 13 | belum - unboxing - buka - tidak - bocor | 79 | 13_belum_unboxing_buka_tidak |
| 14 | indonesia - bangga - produk - buatan - terus | 73 | 14_indonesia_bangga_produk_buatan |
| 15 | repeat - order - kesekian - kali - selalu | 70 | 15_repeat_order_kesekian_kali |
| 16 | tester - test - sample - dapat - bonus | 67 | 16_tester_test_sample_dapat |
| 17 | ketiga - tiga - beli - kali - cny | 59 | 17_ketiga_tiga_beli_kali |
| 18 | selamat - barang - dengan - mendarat - sampai | 57 | 18_selamat_barang_dengan_mendarat |
| 19 | lokal - produk - internasional - brand - kualitas | 53 | 19_lokal_produk_internasional_brand |
| 20 | starterpacking - beli - starterpack - coba - varian | 44 | 20_starterpacking_beli_starterpack_coba |
| 21 | spray - macet - rusak - keras - berfungsi | 44 | 21_spray_macet_rusak_keras |
| 22 | longevity - jam - projection - notes - silage | 42 | 22_longevity_jam_projection_notes |
| 23 | kartu - card - greting - ucap - tulis | 39 | 23_kartu_card_greting_ucap |
| 24 | tahan - lama - harum - disinj - horny | 32 | 24_tahan_lama_harum_disinj |
| 25 | love - just - swet - cinta - lembut | 31 | 25_love_just_swet_cinta |
| 26 | rekomendasi - gela - ank - bangsa - karya | 31 | 26_rekomendasi_gela_ank_bangsa |
| 27 | proses - chat - lama - kirim - pengiriman | 31 | 27_proses_chat_lama_kirim |
| 28 | matahari - esence - sun - sari - ber | 30 | 28_matahari_esence_sun_sari |
| 29 | rekomendasi - sangat - super - layak - direkomendasi | 29 | 29_rekomendasi_sangat_super_layak |
| 30 | blind - beli - muantep - ekspektasi - sisi | 29 | 30_blind_beli_muantep_ekspektasi |
| 31 | istri - hadiah - menyukainya - suami - buat | 28 | 31_istri_hadiah_menyukainya_suami |
| 32 | harum - sue - hua - unik - tagih | 28 | 32_harum_sue_hua_unik |
| 33 | bicara - bintang - biar - alasanya - nyang | 28 | 33_bicara_bintang_biar_alasanya |
| 34 | worth - it - layak - lumayan - harga | 26 | 34_worth_it_layak_lumayan |
| 35 | atomizer - dapat - bonus - 10ml - praktis | 25 | 35_atomizer_dapat_bonus_10ml |
| 36 | hadiah - tahu - buka - belum - semoga | 25 | 36_hadiah_tahu_buka_belum |
| 37 | ragu - perlu - kecewa - tidak - rugi | 24 | 37_ragu_perlu_kecewa_tidak |
| 38 | bintang - 10 - lima - kasih - sebab | 23 | 38_bintang_10_lima_kasih |
| 39 | segar - harum - ambune - ciamik - memuaskan | 22 | 39_segar_harum_ambune_ciamik |
| 40 | tim - hmns - terimakasih - kami - telah | 21 | 40_tim_hmns_terimakasih_kami |
| 41 | twitter - review - instagram - tws - decants | 20 | 41_twitter_review_instagram_tws |
| 42 | pokok - mantap - sampai - cepat - suka | 20 | 42_pokok_mantap_sampai_cepat |
| 43 | travel - ukuran - kecil - botle - frhmptn | 20 | 43_travel_ukuran_kecil_botle |
| 44 | eos - sama - suka - adict - zodiak | 20 | 44_eos_sama_suka_adict |
| 45 | notes - base - printilan - midlle - note | 20 | 45_notes_base_printilan_midlle |
| 46 | bangga - lokal - pride - kualitas - bagus | 20 | 46_bangga_lokal_pride_kualitas |
| 47 | cowok - cocok - spesial - masculine - dimainkan | 19 | 47_cowok_cocok_spesial_masculine |
| 48 | rapi - packing - trusted - ramah - aman | 18 | 48_rapi_packing_trusted_ramah |
| 49 | sukses - terus - lapaknya - inovasi - selalu | 18 | 49_sukses_terus_lapaknya_inovasi |
| 50 | mantan - ayah - abang - dia - masjid | 18 | 50_mantan_ayah_abang_dia |
| 51 | first - wekend - again - dicuekin - tunangan | 18 | 51_first_wekend_again_dicuekin |
| 52 | paket - mudik - maf - buka - selamat | 17 | 52_paket_mudik_maf_buka |
| 53 | bonus - dapat - gantengs - dengen - nembak | 15 | 53_bonus_dapat_gantengs_dengen |
| 54 | premium - packinya - 30k - perbedan - kemasan | 14 | 54_premium_packinya_30k_perbedan |
| 55 | obat - ada - tidak - bets - develop | 14 | 55_obat_ada_tidak_bets |
| 56 | projection - kurang - menyerang - meter - palagi | 14 | 56_projection_kurang_menyerang_meter |
| 57 | coconut - kelapa - pasion - vs - panas | 14 | 57_coconut_kelapa_pasion_vs |
| 58 | cewek - pacar - klepek - senenk - comunikasi | 14 | 58_cewek_pacar_klepek_senenk |
| 59 | pertama - mengoda - kesan - kali - pengatur | 13 | 59_pertama_mengoda_kesan_kali |
| 60 | ekspedisi - lambat - expedisi - hub - langsung | 13 | 60_ekspedisi_lambat_expedisi_hub |
| 61 | tunggu - akhirnya - datang - ye - uh | 13 | 61_tunggu_akhirnya_datang_ye |
| 62 | produk - dikolom - tanya - segini - kabari | 13 | 62_produk_dikolom_tanya_segini |
| 63 | jiwa - mantap - kesegaran - - | 12 | 63_jiwa_mantap_kesegaran_ |
| 64 | darker - warna - hitam - kuning - dof | 12 | 64_darker_warna_hitam_kuning |
| 65 | mewah - limo - seksi - lux - harum | 11 | 65_mewah_limo_seksi_lux |
| 66 | batch - oker - kelewatan - final - bagi | 11 | 66_batch_oker_kelewatan_final |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: indonesian
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: auto
* seed_topic_list: None
* top_n_words: 10
* verbose: False
## Framework versions
* Numpy: 1.23.5
* HDBSCAN: 0.8.29
* UMAP: 0.5.3
* Pandas: 1.5.3
* Scikit-Learn: 1.2.1
* Sentence-transformers: 2.2.2
* Transformers: 4.24.0
* Numba: 0.56.4
* Plotly: 5.9.0
* Python: 3.10.9
|
rtyui123/ppo-Huggy
|
rtyui123
| 2023-07-11T08:46:48Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-07-10T07:20:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: rtyui123/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k
|
NasimB
| 2023-07-11T08:45:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T07:01:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-all-mod-datasets1-rarity-all-iorder-c13k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3983
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.7811 | 0.32 | 500 | 5.6598 |
| 5.4368 | 0.63 | 1000 | 5.2297 |
| 5.0819 | 0.95 | 1500 | 4.9819 |
| 4.8064 | 1.27 | 2000 | 4.8391 |
| 4.6653 | 1.58 | 2500 | 4.7273 |
| 4.5682 | 1.9 | 3000 | 4.6197 |
| 4.3541 | 2.22 | 3500 | 4.5701 |
| 4.2704 | 2.53 | 4000 | 4.5079 |
| 4.2264 | 2.85 | 4500 | 4.4351 |
| 4.051 | 3.17 | 5000 | 4.4290 |
| 3.9415 | 3.49 | 5500 | 4.3896 |
| 3.9311 | 3.8 | 6000 | 4.3596 |
| 3.8035 | 4.12 | 6500 | 4.3598 |
| 3.6487 | 4.44 | 7000 | 4.3523 |
| 3.6387 | 4.75 | 7500 | 4.3363 |
| 3.5857 | 5.07 | 8000 | 4.3408 |
| 3.4463 | 5.39 | 8500 | 4.3415 |
| 3.4459 | 5.7 | 9000 | 4.3420 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
custads23/pygmalion-1.3b
|
custads23
| 2023-07-11T08:41:16Z | 19 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"gpt_neox",
"text-generation",
"text generation",
"conversational",
"en",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T08:40:26Z |
---
license: agpl-3.0
language:
- en
thumbnail: null
tags:
- text generation
- conversational
inference: true
duplicated_from: PygmalionAI/pygmalion-1.3b
---
# Pygmalion 1.3B
## Model description
Pygmalion 1.3B is a proof-of-concept dialogue model based on EleutherAI's [pythia-1.3b-deduped](https://huggingface.co/EleutherAI/pythia-1.3b-deduped).
**Warning:** This model is **NOT** suitable for use by minors. It **will** output X-rated content under certain circumstances.
## Training data
The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real _and_ partially machine-generated conversations.
## Training procedure
Fine-tuning was done using [ColossalAI](https://github.com/hpcaitech/ColossalAI) (specifically, with a slightly modified version of their [OPT fine-tune example](https://github.com/hpcaitech/ColossalAI/blob/78509124d32b63b7fc36f6508e0576a326d51422/examples/language/opt/run_clm.py)) for around 11.4 million tokens over 5440 steps on a single 24GB GPU. The run took just under 21 hours.
## Intended use
### The easy way
We provide a notebook with a Gradio UI for playing around with the model without having to manually format inputs. This notebook can be found [here](https://github.com/PygmalionAI/gradio-ui/blob/master/notebooks/GPU.ipynb).
### The manual way
The model can be used as a regular text generation model, but it'll perform best if the input prompt adheres to the following format:
```
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
[DIALOGUE HISTORY]
You: [Your input message here]
[CHARACTER]:
```
Where `[CHARACTER]` is, as you can probably guess, the name of the character you want the model to portray, and `[DIALOGUE HISTORY]` is the chat history so the model can have some conversational context to draw from. Ideally it'll be pairs of messages like:
```
[CHARACTER]: [some dialogue here]
You: [your response to the dialogue above]
```
Apart from chat history, you can also just add example conversations in `[DIALOGUE HISTORY]` to show how the character should speak - ideally at the beginning, so it doesn't get confused as to what's conversation history vs. character definition.
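As a rough end-to-end sketch (not part of the original card; the character name, persona, and sampling settings below are invented purely for illustration):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="custads23/pygmalion-1.3b")

# Hypothetical character definition following the prompt format described above
prompt = (
    "Mary's Persona: Mary is a cheerful barista who loves talking about coffee.\n"
    "Mary: Welcome in! What can I get started for you?\n"
    "You: What do you recommend today?\n"
    "Mary:"
)
print(generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.7)[0]["generated_text"])
```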
## Known issues
- The model can get stuck repeating certain phrases, or sometimes even entire sentences.
- We believe this is due to that behavior being present in the training data itself, and plan to investigate and adjust accordingly for future versions.
|
KennethTM/gpt2-small-danish
|
KennethTM
| 2023-07-11T08:37:00Z | 193 | 3 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"dataset:oscar",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-17T17:51:54Z |
---
datasets:
- oscar
language:
- da
widget:
- text: Der var engang
---
# What is this?
A GPT-2 model (small version, 124 M parameters) for Danish text generation. The model was not pre-trained from scratch but adapted from the English version.
# How to use
Test the model using the pipeline from the [🤗 Transformers](https://github.com/huggingface/transformers) library:
```python
from transformers import pipeline
generator = pipeline("text-generation", model = "KennethTM/gpt2-small-danish")
text = generator("Manden arbejdede som")
print(text[0]["generated_text"])
```
Or load it using the Auto* classes:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("KennethTM/gpt2-small-danish")
model = AutoModelForCausalLM.from_pretrained("KennethTM/gpt2-small-danish")
```
# Model training
The model is trained using the Danish part of the [oscar dataset](https://huggingface.co/datasets/oscar) ('unshuffled_deduplicated_da') and a context length of 1024 tokens.
The model weights are initialized from the English [GPT-2 small model](https://huggingface.co/gpt2) with new word token embeddings created for Danish using [WECHSEL](https://github.com/CPJKU/wechsel).
Initially, only the word token embeddings are trained using 50.000 samples. Finally, the whole model is trained using 1.000.000 samples.
For reference, the model achieves a perplexity of 33.5 on 5.000 random validation samples.
Model training is carried out on an 8 GB GPU.
# Notes
This is a pre-trained model; for optimal performance, it should be fine-tuned for new tasks, as sketched below.
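A minimal fine-tuning sketch (not from the original card; the corpus file, sequence length, batch size, and epoch count are placeholders to adapt to your task):
```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "KennethTM/gpt2-small-danish"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder corpus: one Danish document per line
dataset = load_dataset("text", data_files={"train": "my_danish_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-small-danish-finetuned",
                           per_device_train_batch_size=4, num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```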
|
PhysHunter/marian-finetuned-kde4-en-to-fr
|
PhysHunter
| 2023-07-11T08:30:02Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-07-11T06:39:45Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.748370858180806
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.7484
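As a rough inference sketch (not part of the auto-generated card; the example sentence is a placeholder):
```python
from transformers import pipeline

translator = pipeline("translation", model="PhysHunter/marian-finetuned-kde4-en-to-fr")
print(translator("Open the file menu and select Save As."))
```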
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
james-xie-rng/whisper-small-voip
|
james-xie-rng
| 2023-07-11T08:28:14Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"en",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-06-24T19:39:00Z |
---
language:
- en
metrics:
- wer
pipeline_tag: automatic-speech-recognition
---
|
Vangmayy/CharacterClassification
|
Vangmayy
| 2023-07-11T08:12:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-09T10:15:04Z |
# Character-Classification
A model trained to classify different characters from The Office.
|
nitzankarby/my-ppo-lunarLander-model
|
nitzankarby
| 2023-07-11T08:01:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T07:47:28Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 244.39 +/- 13.39
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is not stated on this card, so the one below is a placeholder):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Replace the filename with the actual .zip in the repository's file list
checkpoint = load_from_hub(repo_id="nitzankarby/my-ppo-lunarLander-model", filename="<model-file>.zip")
model = PPO.load(checkpoint)
```
|
EmirhanExecute/Taxi-v3
|
EmirhanExecute
| 2023-07-11T07:57:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T07:57:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

# `load_from_hub` is the helper defined in the Deep RL Course notebooks
model = load_from_hub(repo_id="EmirhanExecute/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
NasimB/gpt2-concat-cbt-mod-formatting-rarity-all-no-cut
|
NasimB
| 2023-07-11T07:50:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T06:00:27Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-cbt-mod-formatting-rarity-all-no-cut
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-cbt-mod-formatting-rarity-all-no-cut
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3193
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6908 | 0.29 | 500 | 5.6370 |
| 5.3424 | 0.58 | 1000 | 5.2171 |
| 4.9975 | 0.87 | 1500 | 4.9619 |
| 4.7162 | 1.17 | 2000 | 4.8140 |
| 4.5583 | 1.46 | 2500 | 4.6912 |
| 4.4472 | 1.75 | 3000 | 4.5875 |
| 4.3333 | 2.04 | 3500 | 4.5046 |
| 4.1323 | 2.33 | 4000 | 4.4556 |
| 4.0941 | 2.62 | 4500 | 4.3964 |
| 4.0666 | 2.92 | 5000 | 4.3435 |
| 3.8641 | 3.21 | 5500 | 4.3427 |
| 3.7979 | 3.5 | 6000 | 4.3080 |
| 3.7842 | 3.79 | 6500 | 4.2744 |
| 3.6961 | 4.08 | 7000 | 4.2720 |
| 3.5126 | 4.37 | 7500 | 4.2677 |
| 3.5148 | 4.66 | 8000 | 4.2523 |
| 3.4967 | 4.96 | 8500 | 4.2389 |
| 3.3496 | 5.25 | 9000 | 4.2522 |
| 3.3196 | 5.54 | 9500 | 4.2514 |
| 3.3175 | 5.83 | 10000 | 4.2504 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
sipablo/gatau
|
sipablo
| 2023-07-11T07:41:52Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T07:41:52Z |
---
license: creativeml-openrail-m
---
|
Leonardolin/insurance_multiple_label_my83
|
Leonardolin
| 2023-07-11T07:15:46Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T04:00:14Z |
Usage instructions will be provided later.
The tokenizer is the `bert-base-chinese` fast tokenizer; a rough usage sketch is given below.
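A rough usage sketch under stated assumptions (multi-label sequence classification scored with independent sigmoids; the example input is a placeholder):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Leonardolin/insurance_multiple_label_my83"
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # fast tokenizer, as noted above
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Placeholder insurance question goes here.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: multi-label head, so each label gets an independent sigmoid score
probs = torch.sigmoid(logits)
print(probs)
```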
|
jordyvl/vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.5
|
jordyvl
| 2023-07-11T07:11:21Z | 162 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-07-11T05:58:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-small_rvl_cdip_100_examples_per_class_kd_CEKD_t5.0_a0.5
This model is a fine-tuned version of [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2623
- Accuracy: 0.65
- Brier Loss: 0.4803
- Nll: 3.2676
- F1 Micro: 0.65
- F1 Macro: 0.6575
- Ece: 0.1722
- Aurc: 0.1414
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:----------:|:-------:|:--------:|:--------:|:------:|:------:|
| No log | 1.0 | 25 | 3.4916 | 0.1075 | 0.9342 | 15.2561 | 0.1075 | 0.1132 | 0.1627 | 0.8874 |
| No log | 2.0 | 50 | 2.6905 | 0.395 | 0.7423 | 8.7655 | 0.395 | 0.3694 | 0.1922 | 0.3538 |
| No log | 3.0 | 75 | 2.1229 | 0.505 | 0.6157 | 5.2850 | 0.505 | 0.4830 | 0.1716 | 0.2424 |
| No log | 4.0 | 100 | 1.9322 | 0.55 | 0.5842 | 4.6402 | 0.55 | 0.5501 | 0.1744 | 0.2156 |
| No log | 5.0 | 125 | 1.8231 | 0.5575 | 0.5788 | 4.2830 | 0.5575 | 0.5494 | 0.1777 | 0.2091 |
| No log | 6.0 | 150 | 1.7318 | 0.5875 | 0.5523 | 4.4127 | 0.5875 | 0.5864 | 0.1686 | 0.1950 |
| No log | 7.0 | 175 | 1.6652 | 0.615 | 0.5325 | 3.8720 | 0.615 | 0.6192 | 0.1654 | 0.1740 |
| No log | 8.0 | 200 | 1.5910 | 0.61 | 0.5233 | 3.2435 | 0.61 | 0.6097 | 0.1556 | 0.1702 |
| No log | 9.0 | 225 | 1.7751 | 0.59 | 0.5610 | 3.9627 | 0.59 | 0.5839 | 0.1932 | 0.1965 |
| No log | 10.0 | 250 | 1.5950 | 0.5975 | 0.5521 | 3.9360 | 0.5975 | 0.5922 | 0.1868 | 0.1886 |
| No log | 11.0 | 275 | 1.6105 | 0.6 | 0.5459 | 4.2017 | 0.6 | 0.5960 | 0.1788 | 0.1696 |
| No log | 12.0 | 300 | 1.5566 | 0.5975 | 0.5283 | 3.6344 | 0.5975 | 0.5957 | 0.1843 | 0.1758 |
| No log | 13.0 | 325 | 1.5395 | 0.6225 | 0.5344 | 3.3755 | 0.6225 | 0.6327 | 0.1725 | 0.1721 |
| No log | 14.0 | 350 | 1.5117 | 0.64 | 0.5193 | 3.7990 | 0.64 | 0.6366 | 0.1849 | 0.1659 |
| No log | 15.0 | 375 | 1.5274 | 0.6225 | 0.5381 | 3.5126 | 0.6225 | 0.6198 | 0.1837 | 0.1689 |
| No log | 16.0 | 400 | 1.3822 | 0.645 | 0.4848 | 3.5167 | 0.645 | 0.6501 | 0.1426 | 0.1384 |
| No log | 17.0 | 425 | 1.4390 | 0.6325 | 0.5345 | 3.8558 | 0.6325 | 0.6406 | 0.1859 | 0.1624 |
| No log | 18.0 | 450 | 1.3763 | 0.6425 | 0.4905 | 3.0232 | 0.6425 | 0.6446 | 0.1687 | 0.1388 |
| No log | 19.0 | 475 | 1.5017 | 0.5925 | 0.5558 | 3.9738 | 0.5925 | 0.5699 | 0.2064 | 0.1827 |
| 0.7312 | 20.0 | 500 | 1.4216 | 0.64 | 0.5092 | 3.5054 | 0.64 | 0.6394 | 0.1885 | 0.1583 |
| 0.7312 | 21.0 | 525 | 1.3999 | 0.6325 | 0.5166 | 3.6206 | 0.6325 | 0.6342 | 0.1865 | 0.1586 |
| 0.7312 | 22.0 | 550 | 1.3555 | 0.6575 | 0.5092 | 3.5815 | 0.6575 | 0.6570 | 0.1748 | 0.1565 |
| 0.7312 | 23.0 | 575 | 1.3915 | 0.6375 | 0.5065 | 3.2269 | 0.6375 | 0.6367 | 0.1712 | 0.1485 |
| 0.7312 | 24.0 | 600 | 1.4116 | 0.64 | 0.5130 | 3.7646 | 0.64 | 0.6412 | 0.1690 | 0.1624 |
| 0.7312 | 25.0 | 625 | 1.3663 | 0.64 | 0.5160 | 3.0397 | 0.64 | 0.6471 | 0.1736 | 0.1575 |
| 0.7312 | 26.0 | 650 | 1.3717 | 0.63 | 0.5097 | 3.7950 | 0.63 | 0.6379 | 0.1823 | 0.1570 |
| 0.7312 | 27.0 | 675 | 1.3229 | 0.6425 | 0.4933 | 3.5568 | 0.6425 | 0.6498 | 0.1564 | 0.1470 |
| 0.7312 | 28.0 | 700 | 1.3638 | 0.6275 | 0.5124 | 3.2988 | 0.6275 | 0.6266 | 0.1916 | 0.1600 |
| 0.7312 | 29.0 | 725 | 1.3353 | 0.6475 | 0.5013 | 3.4126 | 0.6475 | 0.6407 | 0.1747 | 0.1558 |
| 0.7312 | 30.0 | 750 | 1.3788 | 0.6325 | 0.5172 | 3.4229 | 0.6325 | 0.6329 | 0.1629 | 0.1650 |
| 0.7312 | 31.0 | 775 | 1.3021 | 0.6525 | 0.4840 | 3.2418 | 0.6525 | 0.6571 | 0.1788 | 0.1412 |
| 0.7312 | 32.0 | 800 | 1.3127 | 0.6525 | 0.5058 | 3.1876 | 0.6525 | 0.6579 | 0.1879 | 0.1525 |
| 0.7312 | 33.0 | 825 | 1.3181 | 0.64 | 0.5023 | 3.1837 | 0.64 | 0.6459 | 0.1751 | 0.1529 |
| 0.7312 | 34.0 | 850 | 1.3071 | 0.6425 | 0.4954 | 3.5271 | 0.6425 | 0.6480 | 0.1615 | 0.1496 |
| 0.7312 | 35.0 | 875 | 1.2808 | 0.655 | 0.4904 | 3.2539 | 0.655 | 0.6606 | 0.1725 | 0.1448 |
| 0.7312 | 36.0 | 900 | 1.2766 | 0.68 | 0.4771 | 3.3397 | 0.68 | 0.6823 | 0.1645 | 0.1408 |
| 0.7312 | 37.0 | 925 | 1.2751 | 0.665 | 0.4837 | 3.3390 | 0.665 | 0.6728 | 0.1723 | 0.1446 |
| 0.7312 | 38.0 | 950 | 1.2658 | 0.67 | 0.4791 | 3.2603 | 0.67 | 0.6760 | 0.1781 | 0.1407 |
| 0.7312 | 39.0 | 975 | 1.2678 | 0.66 | 0.4814 | 3.1865 | 0.66 | 0.6682 | 0.1585 | 0.1414 |
| 0.0683 | 40.0 | 1000 | 1.2737 | 0.66 | 0.4840 | 3.3466 | 0.66 | 0.6658 | 0.1699 | 0.1434 |
| 0.0683 | 41.0 | 1025 | 1.2581 | 0.66 | 0.4769 | 3.1757 | 0.66 | 0.6660 | 0.1752 | 0.1398 |
| 0.0683 | 42.0 | 1050 | 1.2734 | 0.655 | 0.4833 | 3.1843 | 0.655 | 0.6600 | 0.1721 | 0.1440 |
| 0.0683 | 43.0 | 1075 | 1.2628 | 0.66 | 0.4802 | 3.2578 | 0.66 | 0.6670 | 0.1789 | 0.1403 |
| 0.0683 | 44.0 | 1100 | 1.2717 | 0.66 | 0.4837 | 3.2573 | 0.66 | 0.6651 | 0.1584 | 0.1433 |
| 0.0683 | 45.0 | 1125 | 1.2637 | 0.6475 | 0.4791 | 3.3419 | 0.6475 | 0.6545 | 0.1736 | 0.1408 |
| 0.0683 | 46.0 | 1150 | 1.2625 | 0.6575 | 0.4797 | 3.3403 | 0.6575 | 0.6642 | 0.1597 | 0.1406 |
| 0.0683 | 47.0 | 1175 | 1.2642 | 0.6525 | 0.4791 | 3.3527 | 0.6525 | 0.6592 | 0.1731 | 0.1416 |
| 0.0683 | 48.0 | 1200 | 1.2652 | 0.655 | 0.4816 | 3.2664 | 0.655 | 0.6623 | 0.1717 | 0.1413 |
| 0.0683 | 49.0 | 1225 | 1.2646 | 0.65 | 0.4806 | 3.3371 | 0.65 | 0.6568 | 0.1758 | 0.1419 |
| 0.0683 | 50.0 | 1250 | 1.2677 | 0.65 | 0.4812 | 3.4189 | 0.65 | 0.6575 | 0.1582 | 0.1427 |
| 0.0683 | 51.0 | 1275 | 1.2657 | 0.65 | 0.4813 | 3.3393 | 0.65 | 0.6565 | 0.1748 | 0.1413 |
| 0.0683 | 52.0 | 1300 | 1.2648 | 0.655 | 0.4813 | 3.3447 | 0.655 | 0.6629 | 0.1627 | 0.1419 |
| 0.0683 | 53.0 | 1325 | 1.2650 | 0.65 | 0.4813 | 3.3350 | 0.65 | 0.6565 | 0.1780 | 0.1414 |
| 0.0683 | 54.0 | 1350 | 1.2593 | 0.655 | 0.4790 | 3.3427 | 0.655 | 0.6620 | 0.1543 | 0.1399 |
| 0.0683 | 55.0 | 1375 | 1.2648 | 0.6525 | 0.4810 | 3.3368 | 0.6525 | 0.6592 | 0.1723 | 0.1414 |
| 0.0683 | 56.0 | 1400 | 1.2608 | 0.6525 | 0.4802 | 3.2599 | 0.6525 | 0.6603 | 0.1738 | 0.1411 |
| 0.0683 | 57.0 | 1425 | 1.2639 | 0.6525 | 0.4799 | 3.3437 | 0.6525 | 0.6599 | 0.1767 | 0.1413 |
| 0.0683 | 58.0 | 1450 | 1.2631 | 0.65 | 0.4810 | 3.3401 | 0.65 | 0.6578 | 0.1667 | 0.1416 |
| 0.0683 | 59.0 | 1475 | 1.2636 | 0.6525 | 0.4803 | 3.3411 | 0.6525 | 0.6594 | 0.1690 | 0.1416 |
| 0.0391 | 60.0 | 1500 | 1.2618 | 0.6525 | 0.4796 | 3.2684 | 0.6525 | 0.6600 | 0.1813 | 0.1413 |
| 0.0391 | 61.0 | 1525 | 1.2636 | 0.6525 | 0.4807 | 3.2704 | 0.6525 | 0.6595 | 0.1673 | 0.1413 |
| 0.0391 | 62.0 | 1550 | 1.2615 | 0.65 | 0.4794 | 3.2662 | 0.65 | 0.6575 | 0.1741 | 0.1413 |
| 0.0391 | 63.0 | 1575 | 1.2630 | 0.65 | 0.4803 | 3.3417 | 0.65 | 0.6575 | 0.1752 | 0.1411 |
| 0.0391 | 64.0 | 1600 | 1.2618 | 0.65 | 0.4801 | 3.2663 | 0.65 | 0.6575 | 0.1770 | 0.1413 |
| 0.0391 | 65.0 | 1625 | 1.2622 | 0.65 | 0.4802 | 3.2698 | 0.65 | 0.6575 | 0.1686 | 0.1412 |
| 0.0391 | 66.0 | 1650 | 1.2622 | 0.65 | 0.4802 | 3.3400 | 0.65 | 0.6575 | 0.1922 | 0.1412 |
| 0.0391 | 67.0 | 1675 | 1.2625 | 0.65 | 0.4802 | 3.2694 | 0.65 | 0.6575 | 0.1801 | 0.1413 |
| 0.0391 | 68.0 | 1700 | 1.2626 | 0.65 | 0.4803 | 3.2683 | 0.65 | 0.6575 | 0.1656 | 0.1414 |
| 0.0391 | 69.0 | 1725 | 1.2631 | 0.65 | 0.4806 | 3.2696 | 0.65 | 0.6575 | 0.1722 | 0.1413 |
| 0.0391 | 70.0 | 1750 | 1.2622 | 0.65 | 0.4802 | 3.2688 | 0.65 | 0.6575 | 0.1812 | 0.1412 |
| 0.0391 | 71.0 | 1775 | 1.2626 | 0.65 | 0.4803 | 3.2676 | 0.65 | 0.6575 | 0.1845 | 0.1412 |
| 0.0391 | 72.0 | 1800 | 1.2621 | 0.65 | 0.4801 | 3.2683 | 0.65 | 0.6575 | 0.1805 | 0.1411 |
| 0.0391 | 73.0 | 1825 | 1.2626 | 0.65 | 0.4804 | 3.2683 | 0.65 | 0.6575 | 0.1665 | 0.1413 |
| 0.0391 | 74.0 | 1850 | 1.2624 | 0.65 | 0.4803 | 3.2686 | 0.65 | 0.6575 | 0.1773 | 0.1412 |
| 0.0391 | 75.0 | 1875 | 1.2624 | 0.65 | 0.4803 | 3.2682 | 0.65 | 0.6575 | 0.1807 | 0.1412 |
| 0.0391 | 76.0 | 1900 | 1.2627 | 0.65 | 0.4804 | 3.2680 | 0.65 | 0.6575 | 0.1732 | 0.1414 |
| 0.0391 | 77.0 | 1925 | 1.2625 | 0.65 | 0.4803 | 3.2673 | 0.65 | 0.6575 | 0.1715 | 0.1412 |
| 0.0391 | 78.0 | 1950 | 1.2623 | 0.65 | 0.4803 | 3.2681 | 0.65 | 0.6575 | 0.1840 | 0.1413 |
| 0.0391 | 79.0 | 1975 | 1.2624 | 0.65 | 0.4803 | 3.2678 | 0.65 | 0.6575 | 0.1773 | 0.1413 |
| 0.0385 | 80.0 | 2000 | 1.2625 | 0.65 | 0.4803 | 3.2686 | 0.65 | 0.6575 | 0.1802 | 0.1414 |
| 0.0385 | 81.0 | 2025 | 1.2625 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1773 | 0.1413 |
| 0.0385 | 82.0 | 2050 | 1.2625 | 0.65 | 0.4803 | 3.2684 | 0.65 | 0.6575 | 0.1802 | 0.1414 |
| 0.0385 | 83.0 | 2075 | 1.2624 | 0.65 | 0.4803 | 3.2679 | 0.65 | 0.6575 | 0.1823 | 0.1413 |
| 0.0385 | 84.0 | 2100 | 1.2623 | 0.65 | 0.4803 | 3.2681 | 0.65 | 0.6575 | 0.1772 | 0.1413 |
| 0.0385 | 85.0 | 2125 | 1.2624 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 86.0 | 2150 | 1.2625 | 0.65 | 0.4804 | 3.2680 | 0.65 | 0.6575 | 0.1751 | 0.1414 |
| 0.0385 | 87.0 | 2175 | 1.2623 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1772 | 0.1413 |
| 0.0385 | 88.0 | 2200 | 1.2624 | 0.65 | 0.4803 | 3.2676 | 0.65 | 0.6575 | 0.1723 | 0.1414 |
| 0.0385 | 89.0 | 2225 | 1.2623 | 0.65 | 0.4803 | 3.2679 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 90.0 | 2250 | 1.2622 | 0.65 | 0.4802 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1413 |
| 0.0385 | 91.0 | 2275 | 1.2623 | 0.65 | 0.4803 | 3.2678 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 92.0 | 2300 | 1.2624 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 93.0 | 2325 | 1.2623 | 0.65 | 0.4803 | 3.2679 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 94.0 | 2350 | 1.2623 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 95.0 | 2375 | 1.2623 | 0.65 | 0.4803 | 3.2676 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 96.0 | 2400 | 1.2623 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 97.0 | 2425 | 1.2623 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 98.0 | 2450 | 1.2623 | 0.65 | 0.4803 | 3.2677 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 99.0 | 2475 | 1.2623 | 0.65 | 0.4803 | 3.2676 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
| 0.0385 | 100.0 | 2500 | 1.2623 | 0.65 | 0.4803 | 3.2676 | 0.65 | 0.6575 | 0.1722 | 0.1414 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.12.0
- Tokenizers 0.12.1
|
KalyanKumar/roberta-finetuned-subjqa-movies_2
|
KalyanKumar
| 2023-07-11T07:04:13Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-07-11T06:53:31Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-subjqa-movies_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
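Since the card does not yet include a usage snippet, here is a minimal sketch with the standard `transformers` question-answering pipeline; the question and context below are purely illustrative:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="KalyanKumar/roberta-finetuned-subjqa-movies_2")
result = qa(
    question="How was the acting?",  # illustrative question
    context="The acting in this movie was subtle but very convincing throughout.",
)
print(result["answer"], result["score"])
```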
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hongrui/mammogram_v_2_2_2
|
hongrui
| 2023-07-11T06:49:08Z | 4 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-10T23:17:48Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2_2
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
akuan123/RegionCLIP
|
akuan123
| 2023-07-11T06:40:06Z | 0 | 0 | null |
[
"arxiv:2112.09106",
"region:us"
] | null | 2023-07-11T06:24:40Z |
# RegionCLIP: Region-based Language-Image Pretraining
This is the official PyTorch implementation of RegionCLIP (CVPR 2022).
[**Paper**](https://arxiv.org/abs/2112.09106) | [**Demo on Hugging Face**](https://huggingface.co/spaces/CVPR/regionclip-demo) | [**Slides**](https://drive.google.com/file/d/1EepNVJGo_d73Glr4vNjR4Av0dNkBCGcj/view?usp=sharing)
> **RegionCLIP: Region-based Language-Image Pretraining (CVPR 2022)** <br>
> [Yiwu Zhong](https://pages.cs.wisc.edu/~yiwuzhong/), [Jianwei Yang](https://jwyang.github.io/), [Pengchuan Zhang](https://pzzhang.github.io/pzzhang/), [Chunyuan Li](https://chunyuan.li/), [Noel Codella](https://noelcodella.github.io/publicwebsite/), [Liunian Li](https://liunian-harold-li.github.io/), [Luowei Zhou](https://luoweizhou.github.io/), [Xiyang Dai](https://sites.google.com/site/xiyangdai/), [Lu Yuan](https://scholar.google.com/citations?user=k9TsUVsAAAAJ&hl=en), [Yin Li](https://www.biostat.wisc.edu/~yli/), and [Jianfeng Gao](https://www.microsoft.com/en-us/research/people/jfgao/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fjfgao%2F) <br>
<p align="center">
<img src="docs/regionclip.png" width=80% height=80%
class="center">
</p>
## Overview
We propose RegionCLIP that significantly extends CLIP to learn region-level visual representations. RegionCLIP enables fine-grained alignment between image regions and textual concepts, and thus supports region-based reasoning tasks including zero-shot object detection and open-vocabulary object detection.
- **Pretraining**: We leverage a CLIP model to match image regions with template captions, and then pretrain our model to align these region-text pairs.
- **Zero-shot inference**: Once pretrained, the learned region representations support zero-shot inference for object detection.
- **Transfer learning**: The learned RegionCLIP model can be further fine-tuned with additional object detection annotations, allowing our model to be used for fully supervised or open-vocabulary object detection.
- **Results**: Our method demonstrates **state-of-the-art** results for zero-shot object detection and open-vocabulary object detection.
## Updates
* :collision: [10/05/2022] RegionCLIP now supports not only ResNet but also many vision transformers (e.g., ViT, Swin, DaViT, FocalNet) for zero-shot object detection! Please check out the [zero-shot branch](https://github.com/microsoft/RegionCLIP/tree/zero-shot)!
* [09/23/2022] As requested by researchers, we release the [configs](configs/pretrain) and [scripts](pretrain.sh) of pre-training. A full tutorial and pre-training data will be released later. Stay tuned!
* [09/18/2022] Organizing ECCV Workshop [Computer Vision in the Wild (CVinW)](https://computer-vision-in-the-wild.github.io/eccv-2022/), where two challenges are hosted to evaluate the zero-shot, few-shot and full-shot performance of pre-trained vision models in downstream tasks:
- [Image Classification in the Wild (ICinW)](https://eval.ai/web/challenges/challenge-page/1832/overview) Challenge evaluates on 20 image classification tasks.
- [Object Detection in the Wild (ODinW)](https://eval.ai/web/challenges/challenge-page/1839/overview) Challenge evaluates on 35 object detection tasks.
* [07/11/2022] We included the scripts for concept feature extraction. They can be used for your own customized concept pool!
* [07/07/2022] We included the scripts for region feature extraction. The extracted visual features can be used for various downstream tasks!
* [06/24/2022] We released [**a Web demo using Gradio on Hugging Face**](https://huggingface.co/spaces/CVPR/regionclip-demo). It uses our pretrained RegionCLIP for zero-shot inference. Check it out!
* [06/20/2022] We released models and inference code for our RegionCLIP!
## Outline
1. [Installation](#Installation)
2. [Datasets](#Datasets)
3. [Model Zoo](#Model-Zoo)
4. [Zero-shot Inference](#Zero-shot-Inference)
5. [Transfer Learning](#Transfer-Learning)
6. [Extract Region Features](#Extract-Region-Features)
7. [Extract Concept Features](#Extract-Concept-Features)
8. [Citation and Acknowledgement](#Citation-and-Acknowledgement)
9. [Contributing](#Contributing)
## Installation
Check [`INSTALL.md`](docs/INSTALL.md) for installation instructions.
## Datasets
Check [`datasets/README.md`](datasets/README.md) for dataset preparation.
## Model Zoo
Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) for our pretrained models.
## Zero-shot Inference
After pretraining, RegionCLIP can directly support the challenging zero-shot object detection task **without finetuning on detection annotations**. Given an input image, our pretrained RegionCLIP can match image region features to object concept embeddings, and thus classify image regions into many object categories. The image regions are produced by a region localizer (e.g., an RPN), and the object class names come from a dictionary **specified by users**.
### Visualization on custom images
We provide an example below for zero-shot object detection with pretrained RegionCLIP on custom images and for visualizing the results.
<details>
<summary>
Before detecting objects, please prepare pretrained models, label files, and the custom images. See details below.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the pretrained model checkpoint `regionclip_pretrained-cc_rn50x4.pth` (RegionCLIP with ResNet50x4) to the folder `./pretrained_ckpt/regionclip`.
- download the class embeddings `lvis_1203_cls_emb_rn50x4.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Check [`datasets/README.md`](datasets/README.md) to download LVIS label file `lvis_v1_val.json` and put it in the folder `./datasets/lvis/lvis_v1_val.json`. The file is used to specify object class names.
- Put all custom images in the folder `./datasets/custom_images/`.
</details>
<details>
<summary>
After preparation, run the following script to detect objects.
</summary>
```
python3 ./tools/train_net.py \
--eval-only \
--num-gpus 1 \
--config-file ./configs/LVISv1-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_custom_img.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50x4.pth \
MODEL.CLIP.TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/lvis_1203_cls_emb_rn50x4.pth \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
MODEL.CLIP.TEXT_EMB_DIM 640 \
MODEL.RESNETS.DEPTH 200 \
MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION 18 \
```
</details>
<details>
<summary>
The detection results will be stored as the file "./output/inference/lvis_instances_results.json". To visualize it, run the script below.
</summary>
```
python ./tools/visualize_json_results.py \
--input ./output/inference/lvis_instances_results.json \
--output ./output/regions \
--dataset lvis_v1_val_custom_img \
--conf-threshold 0.05 \
--show-unique-boxes \
--max-boxes 25 \
--small-region-px 8100 \
```
</details>
The visualized images will be placed at `./output/regions/`. The visualized images would look like:
<p align="center">
<img src="docs/sample_img1_vis.jpg" width=80% height=80%
class="center">
</p>
In this example, the detection results come from our pretrained RegionCLIP with the ResNet50x4 architecture. The regions are proposed by an RPN trained on 866 object categories from the LVIS dataset. For now, we use the 1203 object class names from the LVIS dataset for this visualization example. We also include an example in `visualize_zeroshot_inference.sh` with our pretrained RegionCLIP (ResNet50 architecture).
### Evaluation for zero-shot inference
We provide an example below for evaluating our pretrained RegionCLIP (ResNet50) using ground-truth boxes on COCO dataset. This will reproduce our results in Table 4 of the paper.
<details>
<summary>
Before evaluation, please prepare pretrained models and set up the dataset.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the pretrained RegionCLIP checkpoint `regionclip_pretrained-cc_rn50.pth` to the folder `./pretrained_ckpt/regionclip`.
- download the class embeddings `coco_65_cls_emb.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Check [`datasets/README.md`](datasets/README.md) to set up COCO dataset.
</details>
<details>
<summary>
After preparation, run the following script to evaluate the pretrained model in zero-shot inference setting.
</summary>
```
python3 ./tools/train_net.py \
--eval-only \
--num-gpus 1 \
--config-file ./configs/COCO-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_ovd_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50.pth \
MODEL.CLIP.TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/coco_65_cls_emb.pth \
MODEL.CLIP.CROP_REGION_TYPE GT \
MODEL.CLIP.MULTIPLY_RPN_SCORE False \
```
</details>
For more examples, please refer to `test_zeroshot_inference.sh`. This script covers a wide range of combinations of pretrained models (ResNet50, ResNet50x4), datasets (COCO, LVIS), and region proposal types (ground-truth regions, RPN proposals). Also, please refer to [MODEL_ZOO.md](docs/MODEL_ZOO.md) for available trained models and [`datasets/README.md`](datasets/README.md) for setting up the COCO and LVIS datasets.
## Transfer Learning
Our pretrained RegionCLIP can be further **fine-tuned** when human annotations of objects are available. In this transfer learning setting, we demonstrate results on **open-vocabulary object detection**, where the object detector is trained on base categories and evaluated on both base and **novel** categories.
We show an example for running a trained detector on custom images. Further, we provide scripts of training and evaluation for the benchmark of **open-vocabulary object detection**, including COCO and LVIS datasets (Table 1 & 2 in paper).
### Visualization on custom images
We provide an example below for running a trained open-vocabulary object detector on custom images and for visualizing the results. In this example, the detector is initialized using RegionCLIP (RN50x4), trained on 866 LVIS base categories, and is tasked to detect all 1203 categories on LVIS.
<details>
<summary>
Before detecting objects, please prepare the trained detectors, label files, and the custom images.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the trained detector checkpoint `regionclip_finetuned-lvis_rn50x4.pth` to the folder `./pretrained_ckpt/regionclip`.
- download the trained RPN checkpoint `rpn_lvis_866_lsj.pth` to the folder `./pretrained_ckpt/rpn`.
- download the class embeddings `lvis_1203_cls_emb_rn50x4.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Check [`datasets/README.md`](datasets/README.md) to download label file `lvis_v1_val.json` and put it in the folder `./datasets/lvis/lvis_v1_val.json`.
- Put all custom images in the folder `./datasets/custom_images/`.
</details>
<details>
<summary>
After preparation, run the following script to detect objects and visualize the results.
</summary>
```
# for simplicity, we integrate the script in visualize_transfer_learning.sh
bash visualize_transfer_learning.sh
```
</details>
The visualized images will be placed at `./output/regions/`.
### Evaluate the trained detectors
We provide an example below for evaluating our open-vocabulary object detector, initialized by RegionCLIP (ResNet50) and trained on COCO dataset.
<details>
<summary>
Before evaluation, please prepare the trained detector and set up the dataset.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the trained detector checkpoint `regionclip_finetuned-coco_rn50.pth` to the folder `./pretrained_ckpt/regionclip`,
- download the trained RPN checkpoint `rpn_coco_48.pth` to the folder `./pretrained_ckpt/rpn`,
- download the class embeddings `coco_48_base_cls_emb.pth` and `coco_65_cls_emb.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Check [`datasets/README.md`](datasets/README.md) to set up COCO dataset.
</details>
<details>
<summary>
After preparation, run the following script to evaluate the trained open-vocabulary detector.
</summary>
```
python3 ./tools/train_net.py \
--eval-only \
--num-gpus 1 \
--config-file ./configs/COCO-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_ovd.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_finetuned-coco_rn50.pth \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x_ovd_FSD.yaml \
MODEL.CLIP.BB_RPN_WEIGHTS ./pretrained_ckpt/rpn/rpn_coco_48.pth \
MODEL.CLIP.TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/coco_48_base_cls_emb.pth \
MODEL.CLIP.OPENSET_TEST_TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/coco_65_cls_emb.pth \
MODEL.ROI_HEADS.SOFT_NMS_ENABLED True \
```
</details>
For more examples, please refer to `test_transfer_learning.sh`. This script includes benchmark evaluation for various combinations of trained detectors (ResNet50, ResNet50x4) and datasets (COCO, LVIS). Also, please refer to [MODEL_ZOO.md](docs/MODEL_ZOO.md) for available trained models and [`datasets/README.md`](datasets/README.md) for setting up the COCO and LVIS datasets.
### Train detectors on your own
We provide an example below for training an open-vocabulary object detector on COCO dataset, with pretrained RegionCLIP (ResNet50) as the initialization.
<details>
<summary>
Before training, please prepare our pretrained RegionCLIP model and set up the dataset.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the pretrained RegionCLIP checkpoint `regionclip_pretrained-cc_rn50.pth` to the folder `./pretrained_ckpt/regionclip`,
- download the trained RPN checkpoint `rpn_coco_48.pth` to the folder `./pretrained_ckpt/rpn`,
- download the class embeddings `coco_48_base_cls_emb.pth` and `coco_65_cls_emb.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Check [`datasets/README.md`](datasets/README.md) to set up COCO dataset.
</details>
<details>
<summary>
After preparation, run the following script to train an open-vocabulary detector.
</summary>
```
python3 ./tools/train_net.py \
--num-gpus 1 \
--config-file ./configs/COCO-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_ovd.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50.pth \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/COCO-InstanceSegmentation/mask_rcnn_R_50_C4_1x_ovd_FSD.yaml \
MODEL.CLIP.BB_RPN_WEIGHTS ./pretrained_ckpt/rpn/rpn_coco_48.pth \
MODEL.CLIP.TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/coco_48_base_cls_emb.pth \
MODEL.CLIP.OPENSET_TEST_TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/coco_65_cls_emb.pth \
```
</details>
For more examples, please refer to `train_transfer_learning.sh`. This script provides training scripts for various combinations of detector backbones (ResNet50, ResNet50x4) and datasets (COCO, LVIS). Also, please refer to [MODEL_ZOO.md](docs/MODEL_ZOO.md) for available trained models and [`datasets/README.md`](datasets/README.md) for setting up the COCO and LVIS datasets.
## Extract Region Features
We provide scripts for extracting region features from our pre-trained RegionCLIP. Given a folder of images, our scripts extract region features (along with other detection results such as box coordinates) and save them as local files.
The following is an example using pretrained RegionCLIP with ResNet50. We extend the scripts from zero-shot inference (section above) with minor changes, such as the input and output folders.
<details>
<summary>
The following is a brief introduction for the settings.
</summary>
We enable feature extraction for two types of regions:
- RPN regions: This setting is class-agnostic. The regions are the top-scored RPN proposals.
- Detection regions: This setting requires additional input as a concept embedding file (the concepts of interests). The regions are the final detection output boxes (after 2nd-stage NMS). As a reference, the [Bottom-Up features](https://openaccess.thecvf.com/content_cvpr_2018/papers/Anderson_Bottom-Up_and_Top-Down_CVPR_2018_paper.pdf) (widely-used in vision-language tasks) also come from the final detection boxes.
</details>
<details>
<summary>
Before running scripts, please prepare pretrained models and your custom images.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the pretrained RegionCLIP checkpoint `regionclip_pretrained-cc_rn50.pth` to the folder `./pretrained_ckpt/regionclip`.
- download the trained RPN checkpoint `rpn_lvis_866.pth` to the folder `./pretrained_ckpt/rpn`.
- (optional) if you want to extract features of the boxes detected for 1203 LVIS concepts, download the class embeddings `lvis_1203_cls_emb.pth` to the folder `./pretrained_ckpt/concept_emb`.
- Put all custom images in a folder. It can be specified in the script (check `INPUT_DIR` below).
</details>
<details>
<summary>
After preparation, run the following script to extract region features.
</summary>
The following script extracts features from **RPN regions**.
```
# RN50, features of RPN regions
python3 ./tools/extract_region_features.py \
--config-file ./configs/LVISv1-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50.pth \
MODEL.CLIP.CROP_REGION_TYPE RPN \
MODEL.CLIP.MULTIPLY_RPN_SCORE True \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
MODEL.CLIP.BB_RPN_WEIGHTS ./pretrained_ckpt/rpn/rpn_lvis_866.pth \
INPUT_DIR ./datasets/custom_images \
OUTPUT_DIR ./output/region_feats \
MODEL.CLIP.OFFLINE_RPN_POST_NMS_TOPK_TEST 100 \
```
The following script extracts features from **detection regions** (after 2nd-stage NMS).
```
# You can simply run "bash extract_region_features.sh"
python3 ./tools/extract_region_features.py \
--config-file ./configs/LVISv1-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50.pth \
MODEL.CLIP.TEXT_EMB_PATH ./pretrained_ckpt/concept_emb/lvis_1203_cls_emb.pth \
MODEL.CLIP.CROP_REGION_TYPE RPN \
MODEL.CLIP.MULTIPLY_RPN_SCORE True \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
MODEL.CLIP.BB_RPN_WEIGHTS ./pretrained_ckpt/rpn/rpn_lvis_866.pth \
INPUT_DIR ./datasets/custom_images \
OUTPUT_DIR ./output/region_feats \
TEST.DETECTIONS_PER_IMAGE 100 \
```
The region features of each image will be saved into a `.pth` file in the folder `OUTPUT_DIR`. For simplicity, the current script only supports single-GPU inference. As a reference, it takes roughly 0.76 seconds on a single Titan Xp GPU with RegionCLIP-ResNet50 and 1203 LVIS object concepts.
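To sanity-check the output, a small sketch (assuming a hypothetical file name under `OUTPUT_DIR`; the exact contents and keys depend on the extraction script) loads one saved file and prints what it holds:
```python
import torch

# Hypothetical file name: one .pth file is written per input image under OUTPUT_DIR.
feats = torch.load("./output/region_feats/sample_img1.pth", map_location="cpu")
if isinstance(feats, dict):
    for key, value in feats.items():
        print(key, getattr(value, "shape", type(value)))
else:
    print(type(feats), getattr(feats, "shape", None))
```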
The following is a list of key arguments for feature extraction. You can specify them in the script as needed.
- `INPUT_DIR` and `OUTPUT_DIR`: specify a folder of input images and an output folder where region features will be saved, respectively.
- `MODEL.CLIP.BB_RPN_WEIGHTS`: specifies which trained RPN to use. You can replace it as needed. For more details, please check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md).
- `MODEL.CLIP.TEXT_EMB_PATH` (optional): specifies which object concept embedding file to use. The selection of concepts will affect the per-class NMS (2nd stage) and thus final output boxes.
- `TEST.DETECTIONS_PER_IMAGE`: defines the number of final output regions (e.g., default value is 100 in COCO configs and 300 in LVIS configs)
- `MODEL.CLIP.OFFLINE_RPN_POST_NMS_TOPK_TEST`: defines the number of region proposals from RPN (e.g., default is 1000). Lowering this value can significantly reduce inference time and memory cost, but might affect the final detection quality.
- `MODEL.CLIP.OFFLINE_RPN_NMS_THRESH` and `MODEL.ROI_HEADS.NMS_THRESH_TEST`: control the NMS IoU thresholds in RPN (1st stage, default is 0.9) and prediction head (2nd stage, default is 0.5), respectively. If you extract features using RPN regions, you might want to change `MODEL.CLIP.OFFLINE_RPN_NMS_THRESH` as needed.
</details>
## Extract Concept Features
Along with the region feature extraction, we also provide scripts for extracting concept features from our pre-trained RegionCLIP. Given a list of concepts, our scripts extract textual embeddings and save them as local files. The following is an example using pretrained RegionCLIP. We extend the scripts from region feature extraction (section above) with minor changes.
<details>
<summary>
Before running scripts, please prepare pretrained models and your custom concepts.
</summary>
- Check [`MODEL_ZOO.md`](docs/MODEL_ZOO.md) to
- download the pretrained RegionCLIP checkpoint `regionclip_pretrained-cc_rn50.pth` to the folder `./pretrained_ckpt/regionclip`.
- Put all concepts in the file `concepts.txt` with each line as a concept name. Place this file in a folder which can be specified in the script (check `INPUT_DIR` below).
</details>
<details>
<summary>
After preparation, run the following script to extract concept features.
</summary>
The following script extracts features from ResNet50.
```
# RN50 concept embeddings
python3 ./tools/extract_concept_features.py \
--config-file ./configs/LVISv1-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50.pth \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
INPUT_DIR ./datasets/custom_concepts \
OUTPUT_DIR ./output/concept_feats \
MODEL.CLIP.GET_CONCEPT_EMB True \
```
And for ResNet50x4, use the following command:
```
# RN50x4 concept embeddings
python3 ./tools/extract_concept_features.py \
--config-file ./configs/LVISv1-InstanceSegmentation/CLIP_fast_rcnn_R_50_C4_zsinf.yaml \
MODEL.WEIGHTS ./pretrained_ckpt/regionclip/regionclip_pretrained-cc_rn50x4.pth \
MODEL.CLIP.TEXT_EMB_DIM 640 \
MODEL.RESNETS.DEPTH 200 \
MODEL.CLIP.OFFLINE_RPN_CONFIG ./configs/LVISv1-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml \
INPUT_DIR ./datasets/custom_concepts \
OUTPUT_DIR ./output/concept_feats \
MODEL.CLIP.GET_CONCEPT_EMB True \
```
The language embeddings of all concepts will be saved into a `.pth` file in the folder `OUTPUT_DIR`. These language embeddings are not yet normalized, for consistency with the concept embeddings provided in [`MODEL_ZOO.md`](docs/MODEL_ZOO.md).
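If cosine similarities against region features are needed, the embeddings can be L2-normalized after loading; a small sketch, assuming the file stores a single `(num_concepts, dim)` tensor and using a hypothetical file name:
```python
import torch
import torch.nn.functional as F

# Hypothetical file name under OUTPUT_DIR; adjust to the actual output of the script.
emb = torch.load("./output/concept_feats/concept_embeds.pth", map_location="cpu")
emb = F.normalize(emb, p=2, dim=-1)  # L2-normalize before computing cosine similarities
print(emb.shape)
```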
The following is a list of key arguments for feature extraction. You can specify them in the script as needed.
- `INPUT_DIR` and `OUTPUT_DIR`: specify the folder containing the input concept file and an output folder where concept embeddings will be saved, respectively.
</details>
## Citation and Acknowledgement
### Citation
If you find this repo useful, please consider citing our paper:
```
@inproceedings{zhong2022regionclip,
title={Regionclip: Region-based language-image pretraining},
author={Zhong, Yiwu and Yang, Jianwei and Zhang, Pengchuan and Li, Chunyuan and Codella, Noel and Li, Liunian Harold and Zhou, Luowei and Dai, Xiyang and Yuan, Lu and Li, Yin and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={16793--16803},
year={2022}
}
```
### Acknowledgement
This repository was built on top of [Detectron2](https://github.com/facebookresearch/detectron2), [CLIP](https://github.com/openai/CLIP), [OVR-CNN](https://github.com/alirezazareian/ovr-cnn), and [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark). We thank the effort from our community.
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [[email protected]](mailto:[email protected]) with any additional questions or comments.
|
bfriederich/distilbert-base-uncased-news-trained
|
bfriederich
| 2023-07-11T06:37:27Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-07T20:04:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-news-trained
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ag_news
type: ag_news
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9194736842105263
- name: F1
type: f1
value: 0.9195099897221968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-news-trained
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2420
- Accuracy: 0.9195
- F1: 0.9195
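A minimal inference sketch with the `transformers` text-classification pipeline; the input sentence is illustrative, and the emitted label names depend on how the id2label mapping for the four AG News classes was saved:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="bfriederich/distilbert-base-uncased-news-trained")
print(classifier("NASA launches a new telescope to study distant galaxies."))
# Expected output shape: [{'label': ..., 'score': ...}]
```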
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.346 | 1.0 | 469 | 0.2511 | 0.9142 | 0.9142 |
| 0.1874 | 2.0 | 938 | 0.2420 | 0.9195 | 0.9195 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hongrui/mammogram_v_2_2_1
|
hongrui
| 2023-07-11T06:21:13Z | 6 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-10T22:48:47Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - hongrui/mammogram_v_2_2_1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the hongrui/mammogram_v_1 dataset. You can find some example images below.




|
NasimB/gpt2-dp-guten-rarity-all-5k-2p5k
|
NasimB
| 2023-07-11T06:03:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T04:08:49Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-dp-guten-rarity-all-5k-2p5k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-dp-guten-rarity-all-5k-2p5k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3172
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.6951 | 0.28 | 500 | 5.6610 |
| 5.3498 | 0.55 | 1000 | 5.2276 |
| 5.0115 | 0.83 | 1500 | 4.9818 |
| 4.7688 | 1.1 | 2000 | 4.8256 |
| 4.5712 | 1.38 | 2500 | 4.7126 |
| 4.4784 | 1.65 | 3000 | 4.6078 |
| 4.3906 | 1.93 | 3500 | 4.5226 |
| 4.1804 | 2.21 | 4000 | 4.4857 |
| 4.1213 | 2.48 | 4500 | 4.4278 |
| 4.0805 | 2.76 | 5000 | 4.3689 |
| 4.0172 | 3.03 | 5500 | 4.3318 |
| 3.7877 | 3.31 | 6000 | 4.3246 |
| 3.7896 | 3.58 | 6500 | 4.2902 |
| 3.7714 | 3.86 | 7000 | 4.2610 |
| 3.628 | 4.13 | 7500 | 4.2685 |
| 3.4948 | 4.41 | 8000 | 4.2600 |
| 3.4897 | 4.69 | 8500 | 4.2447 |
| 3.4837 | 4.96 | 9000 | 4.2332 |
| 3.327 | 5.24 | 9500 | 4.2460 |
| 3.2974 | 5.51 | 10000 | 4.2442 |
| 3.296 | 5.79 | 10500 | 4.2437 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
digiplay/alstroemeriaMix_v1
|
digiplay
| 2023-07-11T06:03:00Z | 275 | 4 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-10T18:43:02Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info :
https://civitai.com/models/38606/alstroemeria-mix
***Please apply the VAE code below to prevent a Colab error.***
```
from diffusers import DiffusionPipeline
from diffusers.models import AutoencoderKL

model_id = "digiplay/alstroemeriaMix_v1"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = DiffusionPipeline.from_pretrained(model_id, vae=vae)
```
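Once the pipeline is built as above, generation works as usual; a minimal continuation (the prompt is illustrative, and `guidance_scale=3` mirrors the "Scale = 3" sample below):
```
pipe.to("cuda")
image = pipe("a portrait of a girl with alstroemeria flowers, detailed",  # illustrative prompt
             guidance_scale=3).images[0]
image.save("alstroemeria_sample.png")
```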
Sample images I made:
This model can generate very detailed images; pretty good.
Scale = 3

Scale = 7

Original author's demo image:

|
joon09/kor-naver-ner-name-v2
|
joon09
| 2023-07-11T05:59:40Z | 146 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-05-24T03:58:09Z |
---
tags:
- generated_from_trainer
model-index:
- name: kor-naver-ner-name-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kor-naver-ner-name-v2
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.0+cu117
- Datasets 2.2.2
- Tokenizers 0.13.3
|
saillab/Nepali_Alpaca_ChatGPT_13B
|
saillab
| 2023-07-11T05:44:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-06T21:36:45Z |
# Nepali-Chat-GPT 13B 🇳🇵
This repo consists of the demo and the necessary guidelines for creating the open-source Nepali Alpaca ChatGPT.
To the best of our knowledge, this is the first open-source large language model created specifically for the **Nepali language**.
## Model
**🇳🇵Nepali Alpaca Chat-GPT-13B**: We finetuned the TheBloke/stable-vicuna-13B-GPTQ model on the Nepali-Alpaca-GPT4 dataset.
**🇳🇵13B Model Generated Output**
```
### Instruction: कृपया मेसिन लर्निङको व्याख्या गर्नुहोस्।
### Output: मेसिन लर्निङ भनेको डाटाबाट सिक्ने कम्प्युटर-आधारित अनुप्रयोगहरू जस्तै मेसिन लर्निङ संरचनाहरू द्वारा प्रशिक्षण गरिएको डेटा प्रयोग गरी सञ्चालन गर्ने विशेष प्रकारको कम्प्युटर प्रणाली हो। यस प्रकारको कम्प्युटर प्रणालीले स्थानान्तरण र प्रशिक्षण गर्न डेटा प्रयोग गर्छ, र यसको कार्यसम्पादनलाई अनुमानित रूपमा सुधार गर्न सक्छ। मेसिन लर्निङ प्रणालीहरूले उनीहरूको लिङ्क र अनुप्रयोगहरू सिक्ने र प्रशिक्षण गर्ने डेटामा सुधार गर्न सक्छन्, सामान्यतया व्यक्तिगत प्रविधिहरू जस्तै चिकनहरू पहिचान गर्ने र इन्स्ट्रुक्टरहरू व्यवहार गर्ने समावेश गर्दछ। मेसिन लर्निङ प्रणालीहरू विभिन्न उद्योगहरूमा फाइदा पुर्याउन सक्छन्, जस्तै स्वास्थ्य सेवा, उद्योग, र वित्त, र यसलाई धेरै प्रयोगकर्ता अनुभवहरू र स्रोतहरू प्रयोग गरेर विकास गरिएको छ।
### Instruction: प्राकृतिक स्रोतहरू र संरक्षण गर्न प्रविधिहरू प्रवर्द्धन गर्नुहोस्।`
### Output: यसमा ऊर्जा परिणाम बढाउने टेक्नोलोजी, सौर्य ईन्धन र फोहोर उपकरणहरू, र नवीकरणीय आविष्कार र प्रविधिहरू समावेश छन्। रोजगार र विकासमा अन्तर्राष्ट्रिय व्यवस्था र अन्तर्राष्ट्रिय घटना रणनीतिहरू लागू गर्नुहोस्। २. पारिस्थितिक र फसलहरूको लागि बाहिर निकाल्नुहोस्। जनसंख्या, उत्सर्जन, र कार्बन उत्सर्जन कम गर्ने सहयोग र अन्तर्राष्ट्रिय साझेदारीहरूको माध्यमबाट सम्बोधन गर्नुहोस्। 3. मानव संसाधनहरूमा दक्षता प्रवर्द्धन गर्नुहोस्। समुद्री र जलीय पारिस्थितिहरूको सुरक्षा, संकुचन, र स्रोतहरू रोक्न मानव व्यवहार र प्रविधिहरूको प्रयोग सुधार गर्नुहोस्। 4. नवीकरणीय प्रविधि र उपकरणहरूको अनुप्रयोग बढाउनुहोस्। सार्वजनिक यातायात, नवीकरणीय ऊर्जा स्रोतहरू, र नवीकरणीय इन्धन स्रोतहरू समावेश छन्।`
```
### How to contribute?
We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66
|
saillab/Nepali_Alpaca_ChatGPT_7B
|
saillab
| 2023-07-11T05:37:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-07-10T04:29:09Z |
# Nepali-Chat-GPT 7B 🇳🇵
This repo consists of the demo and the necessary guidelines for creating the open-source Nepali Alpaca ChatGPT.
To the best of our knowledge, this is the first open-source large language model created specifically for the **Nepali language**.
## Model
**🇳🇵Nepali Alpaca Chat-GPT-7B**: We have finetuned the timdettmers/guanaco-7b model with the Nepali-Alpaca-GPT4 dataset.
HuggingFace Link: https://huggingface.co/saillab/Nepali_Alpaca_ChatGPT_7B
**🇳🇵7B Model Generated Output**
```
### Instruction:आर्टिफिशियल इन्टेलिजेन्सको व्याख्या गर्नुहोस्।
### Output: आर्टिफिसियल इन्टेलिजेन्स, प्रायः एआई भनेर चिनिन्छ, कम्प्युटर प्रणालीहरूको विकास हो जसले सामान्यतया मानव बुद्धिको आवश्यकता पर्ने कार्यहरू गर्न सक्छ। यसले कम्प्युटरहरूलाई स्पष्ट रूपमा प्रोग्राम नगरी सिक्न र सिक्न सक्ने क्षमता प्रदान गर्दछ, जसले तिनीहरूलाई स्वचालित रूपमा समस्याहरू समाधान गर्न र निर्णय गर्न सक्षम बनाउँछ। यसले कम्प्युटर प्रणालीहरूलाई मानिसले जस्तै सोच्न र निर्णय गर्न सक्षम बनाउँछ। यसले स्वास्थ्य सेवा उद्योगमा क्रान्तिकारी परिवर्तन गर्ने क्षमता प्रदान गर्दछ
### Instruction: विश्वका दुर्लभ भाषाहरूको संरक्षणको महत्त्व तीन वाक्यमा व्याख्या गर्नुहोस्।
### Output: विश्वका दुर्लभ भाषाहरूको संरक्षण धेरै कारणहरूको लागि महत्त्वपूर्ण छ। पहिलो, यसले संस्कृतिको विविधतालाई बढावा दिन्छ र विश्वभरका मानिसहरूलाई विभिन्न भाषाहरू सिक्न अनुमति दिन्छ। दोस्रो, संरक्षित भाषाहरूको संरक्षणले विगतका भाषाहरूलाई जोगाउँछ जुन अझै पनि प्रायः प्रयोग हुन्छन्। तेस्रो, संरक्षित भाषाहरूको संरक्षणले विभिन्न संस्कृतिका सांस्कृतिक सामग्रीहरूको संरक्षण र संरक्षण गर्दछ, जसले संस्कृतिको विविधतालाई बढावा दिन्छ।
```
### How to contribute?
We highly encourage you to contribute to this open-source project. Please feel free to fill this Google Form: https://forms.gle/CRN1KJsjNpNsfSR66
|
gfx-labs/xlm-roberta-base-finetuned-panx-hindi
|
gfx-labs
| 2023-07-11T05:27:20Z | 136 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-07-11T05:02:51Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-hi
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.hi
split: validation
args: PAN-X.hi
metrics:
- name: F1
type: f1
value: 0.875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-hi
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2334
- F1: 0.875
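A minimal inference sketch with the `transformers` token-classification pipeline; the Hindi sentence is illustrative, and the returned entity labels come from the PAN-X tag set:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="gfx-labs/xlm-roberta-base-finetuned-panx-hindi",
    aggregation_strategy="simple",
)
print(ner("नरेन्द्र मोदी नई दिल्ली में रहते हैं।"))  # "Narendra Modi lives in New Delhi."
```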
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6369 | 1.0 | 188 | 0.2775 | 0.8157 |
| 0.2751 | 2.0 | 376 | 0.2537 | 0.8402 |
| 0.1737 | 3.0 | 564 | 0.2359 | 0.8606 |
| 0.1188 | 4.0 | 752 | 0.2334 | 0.875 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
mrovejaxd/ABL_c
|
mrovejaxd
| 2023-07-11T05:25:22Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T02:51:54Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: ABL_c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ABL_c
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7496
- Accuracy: 0.7192
- F1: 0.7184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nomsgadded/pokemon-lora
|
nomsgadded
| 2023-07-11T05:25:03Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-07-11T03:46:05Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - nomsgadded/pokemon-lora
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.




|
dashan1992/dsl3
|
dashan1992
| 2023-07-11T05:24:21Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-07-11T05:24:19Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
JennnDexter/dreambooth
|
JennnDexter
| 2023-07-11T05:17:33Z | 29 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-15T07:49:24Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - JennnDexter/dreambooth
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained with the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
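A minimal generation sketch with `diffusers`, using the instance prompt listed above (dtype and sampling settings are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "JennnDexter/dreambooth", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_dog.png")
```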
|
retroai818/ppo-LunarLander-v2
|
retroai818
| 2023-07-11T04:08:19Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T00:27:35Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 255.62 +/- 26.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
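As a hedged sketch of what that TODO could look like (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption -- inspect the repo files for the real checkpoint name.
checkpoint = load_from_hub(repo_id="retroai818/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```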
|
MariTonini/distilbert-base-uncased-finetuned-cola
|
MariTonini
| 2023-07-11T04:05:15Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-07-11T02:35:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariTonini/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariTonini/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1736
- Validation Loss: 0.5611
- Train Matthews Correlation: 0.5378
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5112 | 0.4612 | 0.4628 | 0 |
| 0.3074 | 0.4744 | 0.5186 | 1 |
| 0.1736 | 0.5611 | 0.5378 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
sharpbai/Baichuan-7B
|
sharpbai
| 2023-07-11T03:45:09Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:1910.07467",
"arxiv:2009.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-11T03:40:57Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-7B
*The weight file is split into chunks with a size of 408MB for convenient and fast parallel downloads*
A 408MB split weight version of [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B)
The original model card follows below
-----------------------------------------
# Baichuan-7B
<!-- Provide a quick summary of what the model is/does. -->
Baichuan-7B是由百川智能开发的一个开源的大规模预训练模型。基于Transformer结构,在大约1.2万亿tokens上训练的70亿参数模型,支持中英双语,上下文窗口长度为4096。在标准的中文和英文权威benchmark(C-EVAL/MMLU)上均取得同尺寸最好的效果。
如果希望使用Baichuan-7B(如进行推理、Finetune等),我们推荐使用配套代码库[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
Baichuan-7B is an open-source large-scale pre-trained model developed by Baichuan Intelligent Technology. Based on the Transformer architecture, it is a model with 7 billion parameters trained on approximately 1.2 trillion tokens. It supports both Chinese and English, with a context window length of 4096. It achieves the best performance of its size on standard Chinese and English authoritative benchmarks (C-EVAL/MMLU).
If you wish to use Baichuan-7B (for inference, finetuning, etc.), we recommend using the accompanying code library [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Why use Baichuan-7B
- 在同尺寸模型中Baichuan-7B达到了目前SOTA的水平,参考下面MMLU指标
- Baichuan-7B使用自有的中英文双语语料进行训练,在中文上进行优化,在C-Eval达到SOTA水平
- 不同于LLaMA完全禁止商业使用,Baichuan-7B使用更宽松的开源协议,允许用于商业目的
- Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, Baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.
## How to Get Started with the Model
The following is a 1-shot inference task using Baichuan-7B: given a literary work, the model should output its author. The correct output is "夜雨寄北->李商隐" (the poem "Night Rains Sent North" mapped to its author, Li Shangyin).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
The following is a task of performing 1-shot inference using Baichuan-7B, where the author's name is given based on the work, with the correct output being "One Hundred Years of Solitude->Gabriel Garcia Marquez"
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-7B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-7B", device_map="auto", trust_remote_code=True)
inputs = tokenizer('Hamlet->Shakespeare\nOne Hundred Years of Solitude->', return_tensors='pt')
inputs = inputs.to('cuda:0')
pred = model.generate(**inputs, max_new_tokens=64,repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** 百川智能(Baichuan Intelligent Technology)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** [Baichuan-7B License](https://huggingface.co/baichuan-inc/Baichuan-7B/blob/main/baichuan-7B%20%E6%A8%A1%E5%9E%8B%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)
### Model Sources
<!-- Provide the basic links for the model. -->
整体模型基于标准的Transformer结构,我们采用了和LLaMA一样的模型设计
- **Position Embedding**:采用rotary-embedding,是现阶段被大多数模型采用的位置编码方案,具有很好的外推性。
- **Feedforward Layer**:采用SwiGLU,Feedforward变化为(8/3)倍的隐含层大小,即11008。
- **Layer Normalization**: 基于[RMSNorm](https://arxiv.org/abs/1910.07467)的Pre-Normalization。
具体参数和见下表
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
The overall model is based on the standard Transformer structure, and we have adopted the same model design as LLaMA:
- Position Embedding: We use rotary-embedding, which is the position encoding scheme adopted by most models at this stage, and it has excellent extrapolation capabilities.
- Feedforward Layer: We use SwiGLU. The feedforward changes to (8/3) times the size of the hidden layer, that is, 11008.
- Layer Normalization: Pre-Normalization based on [RMSNorm](https://arxiv.org/abs/1910.07467).
The specific parameters are as follows:
| Hyperparameter | Value |
|----------------|-------|
|n_parameters | 7000559616 |
|n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 64000 |
| sequence length | 4096 |
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
我们同时开源出了和本模型配套的训练代码,允许进行高效的Finetune用于下游任务,具体参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
We have also open-sourced the training code that accompanies this model, allowing for efficient finetuning for downstream tasks. For more details, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
在没有充分评估风险和采取缓解措施的情况下投入生产使用;任何可能被视为不负责任或有害的使用案例。
Production use without adequate assessment of risks and mitigation; any use cases which may be considered irresponsible or harmful.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Baichuan-7B可能会产生事实上不正确的输出,不应依赖它产生事实上准确的信息。Baichuan-7B是在各种公共数据集上进行训练的。尽管我们已经做出了巨大的努力来清洗预训练数据,但这个模型可能会生成淫秽、偏见或其他冒犯性的输出。
Baichuan-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information. Baichuan-7B was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## Training Details
训练具体设置参见[Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B)。
For specific training settings, please refer to [Baichuan-7B](https://github.com/baichuan-inc/Baichuan-7B).
## Evaluation
### Chinese Evaluation
#### C-Eval
The [C-Eval dataset](https://cevalbenchmark.com/index.html) is a comprehensive Chinese evaluation benchmark for foundation models, covering 52 subjects across four difficulty levels. We use its dev split as the few-shot source and run 5-shot evaluation on the test split.
| Model 5-shot | Average | Avg(Hard) | STEM | Social Sciences | Humanities | Others |
|-----------------------------|---------|-----------|------|-----------------|------------|--------|
| GPT-4 | 68.7 | 54.9 | 67.1 | 77.6 | 64.5 | 67.8 |
| ChatGPT | 54.4 | 41.4 | 52.9 | 61.8 | 50.9 | 53.6 |
| Claude-v1.3 | 54.2 | 39.0 | 51.9 | 61.7 | 52.1 | 53.7 |
| Claude-instant-v1.0 | 45.9 | 35.5 | 43.1 | 53.8 | 44.2 | 45.4 |
| moss-moon-003-base (16B) | 27.4 | 24.5 | 27.0 | 29.1 | 27.2 | 26.9 |
| Ziya-LLaMA-13B-pretrain | 30.2 | 22.7 | 27.7 | 34.4 | 32.0 | 28.9 |
| LLaMA-7B-hf | 27.1 | 25.9 | 27.1 | 26.8 | 27.9 | 26.3 |
| ChatGLM-6B | 34.5 | 23.1 | 30.4 | 39.6 | 37.4 | 34.5 |
| Falcon-7B | 25.8 | 24.3 | 25.8 | 26.0 | 25.8 | 25.6 |
| Open-LLaMA-v2-pretrain (7B) | 24.0 | 22.5 | 23.1 | 25.3 | 25.2 | 23.2 |
| TigerBot-7B-base | 25.7 | 27.0 | 27.3 | 24.7 | 23.4 | 26.1 |
| Aquila-7B<sup>*</sup> | 25.5 | 25.2 | 25.6 | 24.6 | 25.2 | 26.6 |
| BLOOM-7B | 22.8 | 20.2 | 21.8 | 23.3 | 23.9 | 23.3 |
| BLOOMZ-7B | 35.7 | 25.8 | 31.3 | 43.5 | 36.6 | 35.6 |
| **Baichuan-7B** | 42.8 | 31.5 | 38.2 | 52.0 | 46.2 | 39.3 |
#### Gaokao
[Gaokao](https://github.com/ExpressAI/AI-Gaokao) is a dataset built from Chinese college entrance examination questions, used to evaluate a model's language ability and logical reasoning.
We keep only the single-answer multiple-choice questions and run a unified 5-shot evaluation on all models.
The results are shown below.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 21.41 |
| Ziya-LLaMA-13B-pretrain | 23.17 |
| Falcon-7B | 23.98 |
| TigerBot-7B-base | 25.94 |
| LLaMA-7B | 27.81 |
| ChatGLM-6B | 21.41 |
| BLOOM-7B | 26.96 |
| BLOOMZ-7B | 28.72 |
| Aquila-7B<sup>*</sup> | 24.39 |
| **Baichuan-7B** | **36.24** |
#### AGIEval
[AGIEval](https://github.com/microsoft/AGIEval) is designed to evaluate a model's general abilities on cognition- and problem-solving-related tasks.
We keep only the four-option single-answer multiple-choice questions and, after a random split, run a unified 5-shot evaluation on all models.
| Model | Average |
|-------------------------|-----------------|
| Open-LLaMA-v2-pretrain | 23.49 |
| Ziya-LLaMA-13B-pretrain | 27.64 |
| Falcon-7B | 27.18 |
| TigerBot-7B-base | 25.19 |
| LLaMA-7B | 28.17 |
| ChatGLM-6B | 23.49 |
| BLOOM-7B | 26.55 |
| BLOOMZ-7B | 30.27 |
| Aquila-7B<sup>*</sup> | 25.58 |
| **Baichuan-7B** | **34.44** |
<sup>*</sup>The Aquila results are taken from the [official BAAI website](https://model.baai.ac.cn/model-detail/100098) and are provided for reference only.
### English Leaderboard
In addition to Chinese, we also tested the model's performance in English.
#### MMLU
[MMLU](https://arxiv.org/abs/2009.03300) is an English evaluation dataset that includes 57 multiple-choice tasks, covering elementary mathematics, American history, computer science, law, etc. The difficulty ranges from high school level to expert level, making it a mainstream LLM evaluation dataset.
We adopted the [open-source](https://github.com/hendrycks/test) evaluation scheme, and the final 5-shot results are as follows (a sketch of the 5-shot prompt format appears after the table):
| Model | Humanities | Social Sciences | STEM | Other | Average |
|----------------------------------------|-----------:|:---------------:|:----:|:-----:|:-------:|
| LLaMA-7B<sup>2</sup> | 34.0 | 38.3 | 30.5 | 38.1 | 35.1 |
| Falcon-7B<sup>1</sup> | - | - | - | - | 35.0 |
| mpt-7B<sup>1</sup> | - | - | - | - | 35.6 |
| ChatGLM-6B<sup>0</sup> | 35.4 | 41.0 | 31.3 | 40.5 | 36.9 |
| BLOOM 7B<sup>0</sup> | 25.0 | 24.4 | 26.5 | 26.4 | 25.5 |
| BLOOMZ 7B<sup>0</sup> | 31.3 | 42.1 | 34.4 | 39.0 | 36.1 |
| moss-moon-003-base (16B)<sup>0</sup> | 24.2 | 22.8 | 22.4 | 24.4 | 23.6 |
| moss-moon-003-sft (16B)<sup>0</sup> | 30.5 | 33.8 | 29.3 | 34.4 | 31.9 |
| **Baichuan-7B<sup>0</sup>** | 38.4 | 48.9 | 35.6 | 48.1 | 42.3 |
The superscript in the Model column indicates the source of the results.
```
0:reimplemented
1:https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
2:https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu
```
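For reference, this is roughly how a 5-shot multiple-choice prompt is assembled for MMLU/C-Eval-style evaluation; it is an illustrative sketch, and the official harness's exact template may differ.
```python
def build_five_shot_prompt(dev_examples, question, choices):
    """dev_examples: list of (question, choices, answer_letter) demonstrations."""
    letters = ["A", "B", "C", "D"]
    blocks = []
    for q, ch, ans in dev_examples[:5]:  # 5-shot: five solved examples from the dev split
        body = q + "\n" + "\n".join(f"{l}. {c}" for l, c in zip(letters, ch))
        blocks.append(body + "\nAnswer: " + ans)
    body = question + "\n" + "\n".join(f"{l}. {c}" for l, c in zip(letters, choices))
    blocks.append(body + "\nAnswer:")    # the model is scored on which letter it continues with
    return "\n\n".join(blocks)

demo = ("2 + 2 = ?", ["3", "4", "5", "6"], "B")
print(build_five_shot_prompt([demo] * 5, "Which planet is known as the Red Planet?",
                             ["Venus", "Mars", "Jupiter", "Saturn"]))
```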
## Our Group
[WeChat](https://github.com/baichuan-inc/Baichuan-7B/blob/main/media/wechat.jpeg?raw=true)
|
alsonlai/dqn-SpaceInvadersNoFrameskip-v4
|
alsonlai
| 2023-07-11T03:37:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T03:37:27Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 499.50 +/- 146.57
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga alsonlai -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga alsonlai
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
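If you prefer to bypass the RL Zoo CLI, here is a sketch of loading the checkpoint directly with stable-baselines3 and huggingface_sb3; the archive filename below is an assumption about how the RL Zoo names the uploaded model, so check the repository's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the zipped SB3 model from this repo (filename is assumed, not verified).
checkpoint = load_from_hub(
    repo_id="alsonlai/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# Recreate the training-time observation pipeline: Atari wrappers + 4-frame stacking.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```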
|
zwtharry/Taxiv3
|
zwtharry
| 2023-07-11T03:29:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T03:29:08Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxiv3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="zwtharry/Taxiv3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
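Beyond loading, here is a self-contained sketch of running one greedy episode with the downloaded Q-table. It assumes the pickled dict follows the Deep RL course layout (keys such as `qtable` and `env_id`); the small helper below mirrors what the course's `load_from_hub` does.
```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_q_model(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and unpickle it.
    with open(hf_hub_download(repo_id=repo_id, filename=filename), "rb") as f:
        return pickle.load(f)

model = load_q_model("zwtharry/Taxiv3", "q-learning.pkl")
env = gym.make(model["env_id"])

state = env.reset()
if isinstance(state, tuple):  # gym >= 0.26 returns (obs, info)
    state = state[0]
done, total_reward = False, 0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    step_out = env.step(action)                      # 4-tuple (old gym) or 5-tuple (new gym)
    state, reward, done = step_out[0], step_out[1], step_out[2]
    total_reward += reward
print("episode return:", total_reward)
```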
|
Chickenfish/Daytechillout
|
Chickenfish
| 2023-07-11T03:27:47Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-11T03:26:57Z |
---
license: creativeml-openrail-m
---
|
zwtharry/q-FrozenLake-v1-4x4-noSlippery
|
zwtharry
| 2023-07-11T03:26:12Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-07-11T03:26:08Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="zwtharry/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sharpbai/Baichuan-13B-Chat
|
sharpbai
| 2023-07-11T03:23:03Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"en",
"arxiv:2104.09864",
"arxiv:2108.12409",
"arxiv:2009.03300",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-07-11T02:54:41Z |
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
---
# Baichuan-13B-Chat
*The weight file is split into 650 MB chunks for convenient and fast parallel downloads.*
A 650 MB split-weight version of [baichuan-inc/Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat).
The original model card follows below.
-----------------------------------------
# Baichuan-13B-Chat
<!-- Provide a quick summary of what the model is/does. -->
## Introduction
Baichuan-13B-Chat为Baichuan-13B系列模型中对齐后的版本,预训练模型可见[Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)。
[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) 是由百川智能继 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 之后开发的包含 130 亿参数的开源可商用的大规模语言模型,在权威的中文和英文 benchmark 上均取得同尺寸最好的效果。本次发布包含有预训练 ([Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base)) 和对齐 ([Baichuan-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan-13B-Chat)) 两个版本。Baichuan-13B 有如下几个特点:
1. **更大尺寸、更多数据**:Baichuan-13B 在 [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B) 的基础上进一步扩大参数量到 130 亿,并且在高质量的语料上训练了 1.4 万亿 tokens,超过 LLaMA-13B 40%,是当前开源 13B 尺寸下训练数据量最多的模型。支持中英双语,使用 ALiBi 位置编码,上下文窗口长度为 4096。
2. **同时开源预训练和对齐模型**:预训练模型是适用开发者的“基座”,而广大普通用户对有对话功能的对齐模型具有更强的需求。因此本次开源我们同时发布了对齐模型(Baichuan-13B-Chat),具有很强的对话能力,开箱即用,几行代码即可简单的部署。
3. **更高效的推理**:为了支持更广大用户的使用,我们本次同时开源了 int8 和 int4 的量化版本,相对非量化版本在几乎没有效果损失的情况下大大降低了部署的机器资源门槛,可以部署在如 Nvidia 3090 这样的消费级显卡上。
4. **开源免费可商用**:Baichuan-13B 不仅对学术研究完全开放,开发者也仅需邮件申请并获得官方商用许可后,即可以免费商用。
Baichuan-13B-Chat is the aligned version in the Baichuan-13B series of models, and the pre-trained model can be found at [Baichuan-13B-Base](https://huggingface.co/baichuan-inc/Baichuan-13B-Base).
[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B) is an open-source, commercially usable large-scale language model developed by Baichuan Intelligence, following [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B). With 13 billion parameters, it achieves the best performance in standard Chinese and English benchmarks among models of its size. This release includes two versions: pre-training (Baichuan-13B-Base) and alignment (Baichuan-13B-Chat). Baichuan-13B has the following features:
1. **Larger size, more data**: Baichuan-13B further expands the parameter volume to 13 billion based on [Baichuan-7B](https://github.com/baichuan-inc/baichuan-7B), and has trained 1.4 trillion tokens on high-quality corpora, exceeding LLaMA-13B by 40%. It is currently the model with the most training data in the open-source 13B size. It supports both Chinese and English, uses ALiBi position encoding, and has a context window length of 4096.
2. **Open-source pre-training and alignment models simultaneously**: The pre-training model is a "base" suitable for developers, while the general public has a stronger demand for alignment models with dialogue capabilities. Therefore, in this open-source release, we also released the alignment model (Baichuan-13B-Chat), which has strong dialogue capabilities and is ready to use. It can be easily deployed with just a few lines of code.
3. **More efficient inference**: To support a wider range of users, we have open-sourced the INT8 and INT4 quantized versions. The model can be conveniently deployed on consumer GPUs like the Nvidia 3090 with almost no performance loss.
4. **Open-source, free, and commercially usable**: Baichuan-13B is not only fully open to academic research, but developers can also use it for free commercially after applying for and receiving official commercial permission via email.
## Usage
如下是一个使用Baichuan-13B-Chat进行对话的示例,正确输出为"乔戈里峰。世界第二高峰———乔戈里峰西方登山者称其为k2峰,海拔高度是8611米,位于喀喇昆仑山脉的中巴边境上"
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

# The chat() helper comes from the model's remote code, so trust_remote_code=True is required.
tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "世界上第二高的山峰是哪座"})
response = model.chat(tokenizer, messages)
print(response)
```
Here is an example of a conversation using Baichuan-13B-Chat, the correct output is "K2. The world's second highest peak - K2, also known as Mount Godwin-Austen or Chhogori, with an altitude of 8611 meters, is located on the China-Pakistan border in the Karakoram Range."
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan-13B-Chat", device_map="auto", torch_dtype=torch.float16, trust_remote_code=True)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")
messages = []
messages.append({"role": "user", "content": "Which mountain is the second highest one in the world?"})
response = model.chat(tokenizer, messages)
print(response)
```
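The feature list above mentions int8/int4 quantized deployment. As a sketch, one way to run int8 inference through the stock transformers + bitsandbytes path is shown below; the repository also ships its own quantization utilities, which may differ from this.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation.utils import GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan-13B-Chat", use_fast=False, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "baichuan-inc/Baichuan-13B-Chat",
    device_map="auto",
    load_in_8bit=True,          # requires the bitsandbytes package; roughly halves memory vs fp16
    trust_remote_code=True,
)
model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan-13B-Chat")

messages = [{"role": "user", "content": "Which mountain is the second highest one in the world?"}]
print(model.chat(tokenizer, messages))
```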
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Baichuan Intelligent Technology (百川智能)
- **Email**: [email protected]
- **Language(s) (NLP):** Chinese/English
- **License:** 【Community License for Baichuan-13B Model】([ZH](Baichuan-13B%20%E6%A8%A1%E5%9E%8B%E5%95%86%E7%94%A8%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf)|
[EN](Community%20License%20for%20Baichuan-13B%20Model.pdf))
**For commercial use:** Please contact us via the email address above to apply for written authorization.
### Model Structure
<!-- Provide the basic links for the model. -->
整体模型基于Baichuan-7B,为了获得更好的推理性能,Baichuan-13B 使用了 ALiBi 线性偏置技术,相对于 Rotary Embedding 计算量更小,对推理性能有显著提升;与标准的 LLaMA-13B 相比,生成 2000 个 tokens 的平均推理速度 (tokens/s),实测提升 31.6%:
| Model | tokens/s |
|-------------|----------|
| LLaMA-13B | 19.4 |
| Baichuan-13B| 25.4 |
具体参数见下表
| 模型名称 | 隐含层维度 | 层数 | 头数 |词表大小 | 总参数量 | 训练数据(tokens) | 位置编码 | 最大长度 |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2万亿 | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4万亿 | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096
The overall model is based on Baichuan-7B. In order to achieve better inference performance, Baichuan-13B uses ALiBi linear bias technology, which has a smaller computational load compared to Rotary Embedding, and significantly improves inference performance. Compared with the standard LLaMA-13B, the average inference speed (tokens/s) for generating 2000 tokens has been tested to increase by 31.6%:
| Model | tokens/s |
|-------------|----------|
| LLaMA-13B | 19.4 |
| Baichuan-13B| 25.4 |
The specific parameters are as follows:
| Model Name | Hidden Size | Num Layers | Num Attention Heads | Vocab Size | Total Params | Training Data (tokens) | Position Embedding | Max Length |
|-------------------------|-------|------------|------------|-----------------|--------|--------|----------------|---------|
| Baichuan-7B | 4,096 | 32 | 32 | 64,000 | 7,000,559,616 | 1.2 trillion | [RoPE](https://arxiv.org/abs/2104.09864) | 4,096 |
| Baichuan-13B | 5,120 | 40 | 40 | 64,000 | 13,264,901,120 | 1.4 trillion | [ALiBi](https://arxiv.org/abs/2108.12409) | 4,096 |
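To make the ALiBi comparison above concrete, here is an illustrative sketch of the ALiBi attention bias in the standard formulation from the paper (not Baichuan-13B's exact implementation): each head adds a linear penalty proportional to the query-key distance to its attention logits.
```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Head-specific slopes form a geometric sequence 2^(-8/n), 2^(-16/n), ..., 2^(-8)
    # (this simple rule assumes n_heads is a power of two; the paper adjusts otherwise).
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / n_heads) for h in range(n_heads)])
    pos = torch.arange(seq_len)
    rel = (pos[None, :] - pos[:, None]).float()   # entry (i, j) = j - i, <= 0 for causal keys
    return slopes[:, None, None] * rel[None]      # (n_heads, seq_len, seq_len), added to logits

print(alibi_bias(n_heads=8, seq_len=6)[0])  # distance is penalized linearly; no rotation of q/k needed
```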
## Usage Notice
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Disclaimer
我们在此声明,我们的开发团队并未基于 Baichuan-13B 模型开发任何应用,无论是在 iOS、Android、网页或任何其他平台。我们强烈呼吁所有使用者,不要利用 Baichuan-13B 模型进行任何危害国家社会安全或违法的活动。另外,我们也要求使用者不要将 Baichuan-13B 模型用于未经适当安全审查和备案的互联网服务。我们希望所有的使用者都能遵守这个原则,确保科技的发展能在规范和合法的环境下进行。
我们已经尽我们所能,来确保模型训练过程中使用的数据的合规性。然而,尽管我们已经做出了巨大的努力,但由于模型和数据的复杂性,仍有可能存在一些无法预见的问题。因此,如果由于使用 Baichuan-13B 开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
We hereby declare that our development team has not developed any applications based on the Baichuan-13B model, whether on iOS, Android, the web, or any other platform. We strongly urge all users not to use the Baichuan-13B model for any activities that harm national social security or are illegal. In addition, we also ask users not to use the Baichuan-13B model for internet services that have not undergone appropriate security review and filing. We hope that all users will adhere to this principle to ensure that technological development takes place in a regulated and legal environment.
We have done our utmost to ensure the compliance of the data used in the model training process. However, despite our great efforts, due to the complexity of the model and data, there may still be some unforeseen issues. Therefore, we will not take any responsibility for any issues arising from the use of the Baichuan-13B open-source model, including but not limited to data security issues, public opinion risks, or any risks and problems arising from the model being misled, misused, disseminated, or improperly exploited.
## Training Details
训练具体设置参见[Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B)。
For specific training settings, please refer to [Baichuan-13B](https://github.com/baichuan-inc/Baichuan-13B).
## Evaluation Results
### [C-Eval](https://cevalbenchmark.com/index.html#home)
| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| Baichuan-7B | 38.2 | 52.0 | 46.2 | 39.3 | 42.8 |
| Chinese-Alpaca-Plus-13B | 35.2 | 45.6 | 40.0 | 38.2 | 38.8 |
| Chinese-LLaMA-Plus-13B | 30.3 | 38.0 | 32.9 | 29.1 | 32.1 |
| Ziya-LLaMA-13B-Pretrain | 27.6 | 34.4 | 32.0 | 28.6 | 30.0 |
| LLaMA-13B | 27.0 | 33.6 | 27.7 | 27.6 | 28.5 |
| moss-moon-003-base (16B)| 27.0 | 29.1 | 27.2 | 26.9 | 27.4 |
| vicuna-13B | 22.8 | 24.8 | 22.3 | 18.5 | 22.2 |
| **Baichuan-13B-Base** | **45.9** | **63.5** | **57.2** | **49.3** | **52.4** |
| **Baichuan-13B-Chat** | **43.7** | **64.6** | **56.2** | **49.2** | **51.5** |
### [MMLU](https://arxiv.org/abs/2009.03300)
| Model 5-shot | STEM | Social Sciences | Humanities | Others | Average |
|-------------------------|:-----:|:---------------:|:----------:|:------:|:-------:|
| LLaMA-13B | 36.1 | 53.0 | 44.0 | 52.8 | 46.3 |
| Chinese-Alpaca-Plus-13B | 36.9 | 48.9 | 40.5 | 50.5 | 43.9 |
| Ziya-LLaMA-13B-Pretrain | 35.6 | 47.6 | 40.1 | 49.4 | 42.9 |
| Baichuan-7B | 35.6 | 48.9 | 38.4 | 48.1 | 42.3 |
| Chinese-LLaMA-Plus-13B | 33.1 | 42.8 | 37.0 | 44.6 | 39.2 |
| vicuna-13B | 24.2 | 24.1 | 24.6 | 26.8 | 24.9 |
| moss-moon-003-base (16B)| 22.4 | 22.8 | 24.2 | 24.4 | 23.6 |
| **Baichuan-13B-Base** | **41.6** | **60.9** | **47.4** | **58.5** | **51.6** |
| **Baichuan-13B-Chat** | **40.9** | **60.9** | **48.8** | **59.0** | **52.1** |
> Note: We used the official MMLU [evaluation scheme](https://github.com/hendrycks/test).
### [CMMLU](https://github.com/haonan-li/CMMLU)
| Model 5-shot | STEM | Humanities | Social Sciences | Others | China Specific | Average |
|-------------------------|:-----:|:----------:|:---------------:|:------:|:--------------:|:-------:|
| Baichuan-7B | 34.4 | 47.5 | 47.6 | 46.6 | 44.3 | 44.0 |
| Chinese-Alpaca-Plus-13B | 29.8 | 33.4 | 33.2 | 37.9 | 32.1 | 33.4 |
| Chinese-LLaMA-Plus-13B | 28.1 | 33.1 | 35.4 | 35.1 | 33.5 | 33.0 |
| Ziya-LLaMA-13B-Pretrain | 29.0 | 30.7 | 33.8 | 34.4 | 31.9 | 32.1 |
| LLaMA-13B | 29.2 | 30.8 | 31.6 | 33.0 | 30.5 | 31.2 |
| moss-moon-003-base (16B)| 27.2 | 30.4 | 28.8 | 32.6 | 28.7 | 29.6 |
| vicuna-13B | 24.0 | 25.4 | 25.3 | 25.0 | 25.0 | 24.9 |
| **Baichuan-13B-Base** | **41.7** | **61.1** | **59.8** | **59.0** | **56.4** | **55.3** |
| **Baichuan-13B-Chat** | **42.8** | **62.6** | **59.7** | **59.0** | **56.1** | **55.8** |
> Note: CMMLU is a comprehensive Chinese evaluation benchmark designed to assess a model's knowledge and reasoning abilities in Chinese contexts. We used its official [evaluation scheme](https://github.com/haonan-li/CMMLU).
## WeChat Group

|
shikras/shikra-7b-delta-v1-0708
|
shikras
| 2023-07-11T03:07:55Z | 58 | 3 |
transformers
|
[
"transformers",
"pytorch",
"shikra",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-10T15:57:55Z |
---
license: cc-by-nc-4.0
---
# Shikra-7B-v1-0708
A frequently updated checkpoint for Shikra-7B-v1.
Changelog: added the A-OKVQA dataset for Multiple Choice Question format training.
|
tensorops/whisper-th-small-combined
|
tensorops
| 2023-07-11T03:05:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-07-10T10:31:06Z |
---
tags:
- generated_from_trainer
model-index:
- name: whisper-th-small-combined
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-th-small-combined
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
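Until the card is filled in, here is a minimal usage sketch, assuming the repository holds a standard Whisper checkpoint with processor files; the model name suggests Thai speech recognition.
```python
from transformers import pipeline

# Long-form audio is handled by chunking; pass a path to a local audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="tensorops/whisper-th-small-combined",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])
```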
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 250
- training_steps: 1000
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.1.0a0+4136153
- Datasets 2.13.1
- Tokenizers 0.13.3
|