| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
sarthakharne/Phi1_5-PreTrained-4-epoch | sarthakharne | 2024-02-03T06:18:36Z | 4 | 0 | transformers | ["transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-03T06:16:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Thala007Dhoni/facedeep | Thala007Dhoni | 2024-02-03T06:16:21Z | 0 | 0 | null | ["region:us"] | null | 2024-02-03T05:03:32Z |
# deepfake-detection
Identify images as real or fake using state-of-the-art AI models.
|
sarthakharne/Phi1_5-PreTrained-3-epoch | sarthakharne | 2024-02-03T06:14:18Z | 4 | 0 | transformers | ["transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-03T06:11:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yoinked/merges | yoinked | 2024-02-03T06:11:00Z | 0 | 7 | null | ["art", "text-to-image", "en", "license:other", "region:us"] | text-to-image | 2023-03-26T23:51:40Z |
---
license: other
language:
- en
pipeline_tag: text-to-image
tags:
- art
---
Some merges and/or GGML conversions.
img: booru tags; use the `/awoo/` models preferably, as they're the best.
All non-GGML models are licensed under yodayno v2:
```
This license allows you to use the model, but only for non-commercial purposes. You cannot use the model or any part of it in a paid service or sell it.
If you use the model on any platform, you must provide a link or reference to the original model. You must give credit to the licensor whenever you use the model.
The licensor does not provide any warranty and is not liable for any damages caused by the use of the model.
If you break any of the terms, this license will be terminated.
This license is governed by the laws of the jurisdiction in which the licensor is located.
```
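For reference, a checkpoint from this repo could be loaded with `diffusers` roughly as follows. This is only a sketch: the file name below is a placeholder, so substitute an actual `.safetensors` checkpoint from the repository (preferably one of the `/awoo/` models) and prompt with booru tags as recommended above.
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder file name: substitute a real checkpoint from this repository.
pipe = StableDiffusionPipeline.from_single_file(
    "https://huggingface.co/yoinked/merges/blob/main/awoo/example-model.safetensors",
    torch_dtype=torch.float16,
).to("cuda")

# Booru-tag style prompting, as recommended above.
image = pipe("1girl, forest, masterpiece, best quality").images[0]
image.save("out.png")
```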
|
AKILESH18/lamam | AKILESH18 | 2024-02-03T06:04:31Z | 2 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-02T17:04:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JahnaviKumar/FGL_DevEmotionAnalysis | JahnaviKumar | 2024-02-03T06:00:52Z | 3 | 0 | transformers | ["transformers", "pytorch", "roberta", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-02-03T05:26:38Z |
This model is trained on comments from fast-growing programming languages on GitHub. The corresponding paper has been accepted at ICPC'24; for further details on the dataset, methodology, and results, please refer to https://doi.org/10.1145/3643916.3644422.
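For a quick try of the classifier, a minimal sketch along these lines should work; the example comment is illustrative, and the returned label names come from the model's own config:
```python
from transformers import pipeline

# Load the fine-tuned RoBERTa emotion classifier from the Hub.
classifier = pipeline("text-classification", model="JahnaviKumar/FGL_DevEmotionAnalysis")

# Classify a developer comment; labels come from the model's config, not from this example.
print(classifier("This API change breaks all my existing tests, very frustrating."))
```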
|
GGital/vit-SUPER02 | GGital | 2024-02-03T05:58:01Z | 14 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-02-02T18:15:00Z |
---
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- f1
model-index:
- name: vit-SUPER02
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-SUPER02
This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- F1: 1.0
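Until the card is filled in, a minimal inference sketch (not part of the original card) is to load the checkpoint with the standard `transformers` image-classification pipeline; the image path is a placeholder and the labels are whatever the imagefolder dataset defined:
```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned ViT checkpoint from the Hub.
classifier = pipeline("image-classification", model="GGital/vit-SUPER02")

# "example.jpg" is a placeholder path; labels come from the training imagefolder dataset.
image = Image.open("example.jpg")
print(classifier(image))
```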
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0798 | 0.16 | 50 | 0.0393 | 0.9904 |
| 0.0161 | 0.31 | 100 | 0.0176 | 0.9936 |
| 0.0017 | 0.47 | 150 | 0.0020 | 0.9984 |
| 0.0012 | 0.62 | 200 | 0.0026 | 0.9985 |
| 0.0001 | 0.78 | 250 | 0.0001 | 1.0 |
| 0.0001 | 0.93 | 300 | 0.0001 | 1.0 |
| 0.0001 | 1.09 | 350 | 0.0001 | 1.0 |
| 0.0 | 1.24 | 400 | 0.0000 | 1.0 |
| 0.0 | 1.4 | 450 | 0.0000 | 1.0 |
| 0.0 | 1.55 | 500 | 0.0000 | 1.0 |
| 0.0 | 1.71 | 550 | 0.0000 | 1.0 |
| 0.0 | 1.86 | 600 | 0.0000 | 1.0 |
| 0.0 | 2.02 | 650 | 0.0000 | 1.0 |
| 0.0 | 2.17 | 700 | 0.0000 | 1.0 |
| 0.0 | 2.33 | 750 | 0.0000 | 1.0 |
| 0.0 | 2.48 | 800 | 0.0000 | 1.0 |
| 0.0 | 2.64 | 850 | 0.0000 | 1.0 |
| 0.0 | 2.8 | 900 | 0.0000 | 1.0 |
| 0.0 | 2.95 | 950 | 0.0000 | 1.0 |
| 0.0 | 3.11 | 1000 | 0.0000 | 1.0 |
| 0.0 | 3.26 | 1050 | 0.0000 | 1.0 |
| 0.0 | 3.42 | 1100 | 0.0000 | 1.0 |
| 0.0 | 3.57 | 1150 | 0.0000 | 1.0 |
| 0.0 | 3.73 | 1200 | 0.0000 | 1.0 |
| 0.0 | 3.88 | 1250 | 0.0000 | 1.0 |
| 0.0 | 4.04 | 1300 | 0.0000 | 1.0 |
| 0.0 | 4.19 | 1350 | 0.0000 | 1.0 |
| 0.0 | 4.35 | 1400 | 0.0000 | 1.0 |
| 0.0 | 4.5 | 1450 | 0.0000 | 1.0 |
| 0.0 | 4.66 | 1500 | 0.0000 | 1.0 |
| 0.0 | 4.81 | 1550 | 0.0000 | 1.0 |
| 0.0 | 4.97 | 1600 | 0.0000 | 1.0 |
| 0.0 | 5.12 | 1650 | 0.0000 | 1.0 |
| 0.0 | 5.28 | 1700 | 0.0000 | 1.0 |
| 0.0 | 5.43 | 1750 | 0.0000 | 1.0 |
| 0.0 | 5.59 | 1800 | 0.0000 | 1.0 |
| 0.0 | 5.75 | 1850 | 0.0000 | 1.0 |
| 0.0 | 5.9 | 1900 | 0.0000 | 1.0 |
| 0.0 | 6.06 | 1950 | 0.0000 | 1.0 |
| 0.0 | 6.21 | 2000 | 0.0000 | 1.0 |
| 0.0 | 6.37 | 2050 | 0.0000 | 1.0 |
| 0.0 | 6.52 | 2100 | 0.0000 | 1.0 |
| 0.0 | 6.68 | 2150 | 0.0000 | 1.0 |
| 0.0 | 6.83 | 2200 | 0.0000 | 1.0 |
| 0.0 | 6.99 | 2250 | 0.0000 | 1.0 |
| 0.0 | 7.14 | 2300 | 0.0000 | 1.0 |
| 0.0 | 7.3 | 2350 | 0.0000 | 1.0 |
| 0.0 | 7.45 | 2400 | 0.0000 | 1.0 |
| 0.0 | 7.61 | 2450 | 0.0000 | 1.0 |
| 0.0 | 7.76 | 2500 | 0.0000 | 1.0 |
| 0.0 | 7.92 | 2550 | 0.0000 | 1.0 |
| 0.0 | 8.07 | 2600 | 0.0000 | 1.0 |
| 0.0 | 8.23 | 2650 | 0.0000 | 1.0 |
| 0.0 | 8.39 | 2700 | 0.0000 | 1.0 |
| 0.0 | 8.54 | 2750 | 0.0000 | 1.0 |
| 0.0 | 8.7 | 2800 | 0.0000 | 1.0 |
| 0.0 | 8.85 | 2850 | 0.0000 | 1.0 |
| 0.0 | 9.01 | 2900 | 0.0000 | 1.0 |
| 0.0 | 9.16 | 2950 | 0.0000 | 1.0 |
| 0.0 | 9.32 | 3000 | 0.0000 | 1.0 |
| 0.0 | 9.47 | 3050 | 0.0000 | 1.0 |
| 0.0 | 9.63 | 3100 | 0.0000 | 1.0 |
| 0.0 | 9.78 | 3150 | 0.0000 | 1.0 |
| 0.0 | 9.94 | 3200 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
morisato/TRAIN | morisato | 2024-02-03T05:52:26Z | 0 | 2 | null | ["ja", "license:unknown", "region:us"] | null | 2024-02-02T13:33:41Z |
---
license: unknown
language:
- ja
---
# Railway / Train LoRAs
To all the lovers of odd LoRAs around the world, I hope you are doing well.<br>
<br>
These are railway and train LoRAs created as part of an "experiment in additional training".<br>
To state the conclusion up front: learning rolling stock and train interiors is quite difficult, and the LoRAs uploaded here are mostly unable to generate the kinds of illustrations I had hoped for.<br>
Because AI image generation works by "reconstructing a vaguely blurred image out of a sea of noise", perspective often gets distorted and arbitrary alterations are frequently added.<br>
Generating illustrations of vehicles and machines such as cars, motorcycles, and trains may be a genre that does not sit well with enthusiastic fans who observe each one closely and notice even small differences in specification.<br>
<br>
The SD1.5-family models we use seem to have learned overseas rolling stock and railway scenery to some extent.<br>
If you write "train" or "train interior" in the prompt, you can generate illustrations of train exteriors and interiors.<br>
However, the results tend to look like foreign trains and interiors, and the models cannot draw the domestic trains and interior scenes that we in Japan see in daily life.<br>
Can additional training such as LoRA produce illustrations of outings on Japanese trains? The interior scenes of the Yamanote Line E235 series and Hankyu 3000 series uploaded separately were my first attempt at finding out.<br>
As a result, the LoRA does learn the interior features to some extent, but it cannot reproduce regularities such as window, door, and seat placement, so the output ends up looking like scenes from a rather otherworldly Japan.<br>
<br>
Next comes learning the car exteriors. As with interiors, the features are learned to some extent, but cars get stretched vertically or horizontally, deform, lose parts that should be there, or sprout extras, producing illustrations that feel quite unsettling to anyone who knows the real thing.<br>
I personally call the rate at which a LoRA produces the images I intended its "batting average", and here that average does not even reach 20-30%.<br>
<br>
Since the rough features have been learned, referencing photos of the actual cars with ControlNet or similar improves things somewhat...<br>
but you have to prepare the source material, which I suspect is rather a hassle.<br>
<br>
Perhaps the real takeaway from "just experimenting" is a rough sense of what seems feasible and what seems hard to improve.<br>
With that in mind, I have decided to gather together the various railway-related LoRAs I have made recently. I hope they are useful for your research and experiments.<br>
<br>
<br>
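As a rough usage sketch (not from the original card): these LoRAs target SD1.5-family checkpoints, so loading one with `diffusers` would look roughly like the following. The base model and LoRA file name are placeholders; check the repository files for the actual names, and build the prompt from the trigger tags listed with each LoRA below.<br>
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder base model and LoRA file name: substitute what you actually use / find in this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("morisato/TRAIN", weight_name="E233_1000_SD15.safetensors")

# Prompt assembled from the trigger tags listed for the corresponding LoRA.
image = pipe("e233, train interior, scenery, seat, window, door, poster (object)").images[0]
image.save("e233_interior.png")
```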
## E233_1000_SD15

Interior scenes of the Keihin-Tohoku Line E233 series. Seat and door placement, armrest-partition shapes, and so on all fall apart. In this sample image, the ceiling lighting layout and the air-conditioning louvers / line-flow fans are also reproduced incorrectly.<br>
e233, train interior, scenery, seat, window, door, poster (object) <br>
<br>
## E233ex_SD15

Exterior of the Keihin-Tohoku Line E233 series. It often generates subtly deformed, partly collapsed cars you have never seen before.<br>
e233, exterior, train, train station, railroad tracks, scenery, outdoors, day, real world location, <br>
<br>
## E235_SD15_V6

Interior of the Yamanote Line E235 series. Seat and door placement, stanchion poles, the above-window signage monitors, and the armrest-partition shapes all fall apart.<br>
e235, train interior, scenery, seat, reflection, window, reflective floor, poster (object), realistic, <br>
<br>
## Hanshin5000

Exterior of the Hanshin "Jet Car" 5001 series. It tends to deform into a rather squishy 5001 series.<br>
Hanshin5000, scenery, railroad tracks, train station, outdoors, train, real world location, power lines, <br>
<br>
## JNR205ex_SD15

Exterior of the Saikyo Line 205 series. It often generates 205 series cars from another world.<br>
JNR205, train, railroad tracks, scenery, real world location, outdoors, realistic, photo background, building, power lines, headlight<br>
JNR205, train, train station, railroad tracks, scenery, real world location, outdoors, day, ceiling, ceiling light, tail lights<br>
<br>
## JNR12kei_SD15

Interior of the former JNR 12 series passenger cars. The seat layout and so on tend to come out a mess.<br>
12kei, aisle, train interior, scenery, window, seat, ceiling light, indoors, sunlight, reflective floor<br>
<br>
## JNR_SUHA43_SD15

Interior and exterior of former JNR passenger cars such as the SUHA 43 and SUHAFU 42. The seat layout tends to come out a mess.<br>
suha43aisle, train interior, scenery, seat, window, sunlight, ceiling, ceiling light, indoors<br>
suha43, railroad tracks, train station, train, scenery, outdoors, day, tree, real world location<br>
<br>
## JNR_SUHA43W_SD15

An attempt to create travel-scene illustrations by training only on the view of the window-side box seats as seen from the aisle of SUHA 43 / SUHAFU 42 passenger cars.<br>
It did not work well, because the base model's own "train interior" tag is strongly trained on views along the direction of travel.<br>
suha43window, train interior, scenery, seat, window, shadow, day, sunlight, door, indoors<br>
<br>
## JNR_oha35_SD15

Interior of the former JNR OHA 35 passenger cars. These are slightly older than the SUHA 43, and I trained on the wooden interior scenes of cars that had received little modernization. The seat layout tends to come out a mess.<br>
oha35, train interior, scenery, window, indoors, sunlight, chair, ceiling, ceiling light, wooden floor<br>
<br>
## oha35_deck_SD15

An attempt to turn the area around the vestibule (deck) of old passenger cars into illustrations. It does not work well: results turn foreign-looking, turn into houses, or simply collapse.<br>
kyukyaku, vestibule, train interior, scenery, door, indoors, ceiling light, wooden floor, train<br>
kyukyaku, scenery, train station, railroad tracks, day, outdoors, door, window, sign, sunlight, train, vestibule, outdoors<br>
kyukyaku, train interior, scenery, vestibule, building, train station, power lines, outdoors, door, window<br>
<br>
## Osaka Loop Line 103 series

The 103 series that used to run on the Osaka Loop Line. JR West's life-extended 40N refurbished 103 series cars already differed considerably from the original; the generated images tend to drift even further from it.<br>
JRE103, train, train station, railroad tracks, outdoors, real world location, photo background, 1boy, realistic, standing, scenery, headlight<br>
JRE103, train, train station, railroad tracks, multiple boys, vehicle focus, scenery, tail lights<br>
<br>
## Osaka Loop Line 201 series

The 201 series that used to run on the Osaka Loop Line. It tends to generate 201 series cars from another world.<br>
JRE201, train, night, train station, scenery, outdoors, building, railroad tracks, headlight<br>
JRE201, train, train station, railroad tracks, scenery, vehicle focus, outdoors, tail lights<br>
<br>
## Osaka Loop Line 323 series

The 323 series, currently the mainstay of the Osaka Loop Line. It tends to generate 323 series cars from another world.<br>
JRE323, train, train station, pants, multiple boys, backpack, bag, railroad tracks, multiple girls, shoes, scenery, real world location, standing, headlight<br>
JRE323, train, train station, railroad tracks, scenery, outdoors, real world location, tail lights<br>
<br>
## OsakaMetro10A

The 10A series that once ran on the Midosuji Line of the Osaka Municipal Transportation Bureau (now Osaka Metro). It tends to turn into an otherworldly 10A series.<br>
OsakaMetro10A, subway station, train station, train, multiple boys, bag, real world location, multiple girls, railroad tracks, pants, 6+boys, black hair, rolling suitcase, holding, outdoors, tail lights<br>
OsakaMetro10A, subway station, train station, train, railroad tracks, hat, 1boy, scenery, realistic, uniform, railroad worker, outdoors, tail lights<br>
<br>
## OsakaMetro20

The Osaka Metro Chuo Line 20 series. It tends to turn into an otherworldly 20 series.<br>
OsakaMetro20, subway station, train, train station, scenery, railroad tracks, ceiling, ceiling light, headlight<br>
OsakaMetro20, subway station, train, train station, multiple boys, railroad tracks, real world location, multiple girls, scenery, ceiling, ceiling light, tail lights, headlight<br>
<br>
## OsakaMetro21

The Osaka Metro Midosuji Line 21 series. It tends to turn into an otherworldly 21 series.<br>
OsakaMetro21, subway station, train, train station, railroad tracks, scenery, real world location, outdoors, ceiling, ceiling light, headlight<br>
OsakaMetro21, subway station, train, train station, scenery, ceiling, tail lights<br>
<br>
## OsakaMetro22

The Osaka Metro Tanimachi Line 22 series. It tends to turn into an otherworldly 22 series.<br>
OsakaMetro22, subway station, train, train station, multiple girls, pants, bag, 1boy, railroad tracks, multiple boys, ceiling, ceiling light, headlight<br>
OsakaMetro22, subway station, train, train station, multiple boys, 6+boys, hat, real world location, scenery, shirt, night, pants, gloves, bag, holding, white shirt, uniform, railroad worker, ceiling, ceiling light, tail lights<br>
<br>
## OsakaMetro66

The Osaka Metro Sakaisuji Line 66 series. It tends to turn into an otherworldly 66 series.<br>
OsakaMetro66, subway station, train, scenery, train station, outdoors, ceiling, headlight<br>
OsakaMetro66, subway station, train, scenery, train station, tiles, tile floor, door, ceiling, ceiling light, tail lights<br>
<br>
## OsakaMetro70

The Osaka Metro Nagahori Tsurumi-ryokuchi Line 70 series. It tends to turn into an otherworldly 70 series.<br>
OsakaMetro70, subway station, train, scenery, train station, night, ceiling, ceiling light, headlight<br>
OsakaMetro70, subway station, train, train station, railroad tracks, scenery, outdoors, ceiling, ceiling light, taillight<br>
<br>
## OsakaMetro80

The Osaka Metro Imazatosuji Line 80 series. It tends to turn into an otherworldly 80 series.<br>
OsakaMetro80, subway station, train, scenery, ceiling, ceiling light, scenery, headlight<br>
OsakaMetro80, subway station, train, scenery, door, train station, outdoors, light, ceiling, ceiling light, scenery, taillight<br>
<br>
## OsakaMetro400

The Osaka Metro Chuo Line 400 series, notable for its futuristic design. It tends to evolve all the way into another world.<br>
OsakaMetro400, subway station, train station, scenery, headlight<br>
OsakaMetro400, subway station, train station, train, scenery, taillight<br>
<br>
## OsakaMetro30000

The Osaka Metro Midosuji Line 30000 series. It collapses frequently.<br>
OsakaMetro30000, subway station, 1boy, pants, shirt, male focus, white shirt, black pants, hat, solo, from behind, black hair, night, headlight<br>
OsakaMetro30000, subway station, train station, scenery, night, railroad tracks, train, sign, door, real world location, ceiling, ceiling light, headlight<br>
OsakaMetro30000, subway station, police, hat, train station, police uniform, motor vehicle, train, scenery, taillight<br>
<br>
## TokyoMetro01_SD15

The Ginza Line 01 series of the former Teito Rapid Transit Authority (now Tokyo Metro). The light placement, front emergency door, and so on tend to fall apart.<br>
TokyoMetro01, subway station, train station, train, 6+boys, multiple boys, blurry, real world location, depth of field, railroad tracks, bag, multiple girls, scenery, ceiling, ceiling light, headlight<br>
TokyoMetro01, subway station, train station, train, scenery, real world location, railroad tracks, multiple boys, multiple girls, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro02_SD15

The Teito Rapid Transit Authority Marunouchi Line 02 series. The light placement, front emergency door, and so on tend to fall apart.<br>
TokyoMetro02, subway station, train station, train, scenery, railroad tracks, real world location, realistic, night, ceiling, ceiling light, headlight<br>
TokyoMetro02, subway station, train station, train, scenery, railroad tracks, ceiling, ceiling light, taillight<br>
<br>
## TokyoMetro03_SD15

The Teito Rapid Transit Authority Hibiya Line 03 series. The light placement, front emergency door, and so on tend to fall apart.<br>
TokyoMetro03, subway station, train station, train, scenery, railroad tracks, sign, real world location, bag, outdoors, day, ceiling, ceiling light, headlight<br>
TokyoMetro03, subway station, train station, train, multiple boys, bag, scenery, railroad tracks, skirt, 6+boys, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro05_SD15

The Teito Rapid Transit Authority Tozai Line 05 series. The light placement, front emergency door, and so on tend to fall apart.<br>
TokyoMetro05, subway station, train station, train, railroad tracks, scenery, outdoors, bench, ceiling, ceiling light, headlight<br>
TokyoMetro05, subway station, train station, train, railroad tracks, white shirt, pants, 1boy, shirt, black hair, scenery, male focus, hat, black pants, short sleeves, standing, real world location, black headwear, wide shot, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro10000_SD15

The Tokyo Metro Yurakucho / Fukutoshin Line 10000 series. Its rounded front profile is distinctive, but the generated images tend to collapse that shape completely.<br>
TokyoMetro10000, subway station, train station, train, scenery, sign, outdoors, 1boy, jacket, pants, standing, blurry, ceiling, ceiling light, headlight<br>
TokyoMetro10000, subway station, train station, train, railroad tracks, real world location, scenery, photo background, realistic, 1boy, vehicle focus, ceiling, ceiling light, tail lights, headlight<br>
TokyoMetro10000, subway station, train station, train, scenery, railroad tracks, real world location, building, outdoors, day, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro1000_SD15

The Tokyo Metro Ginza Line 1000 series. Its design is calm and retro, yet the generated images are often anything but calm.<br>
TokyoMetro1000, subway station, train station, train, scenery, sign, light, railroad tracks, ceiling, ceiling light, headlight<br>
TokyoMetro1000, subway station, train station, multiple boys, train, hat, scenery, railroad tracks, real world location, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro5000_SD15

The 5000 series that used to run on the Teito Rapid Transit Authority Tozai Line. The light placement, front doors, and so on tend to fall apart.<br>
TokyoMetro5000, subway station, train station, train, scenery, railroad tracks, outdoors, real world location, ceiling, ceiling light, headlight<br>
TokyoMetro5000, subway station, train station, train, railroad tracks, black hair, 1boy, standing, 1girl, pants, shoes, wide shot, scenery, real world location, tail lights<br>
<br>
## TokyoMetro6000_SD15

The 6000 series that used to run on the Teito Rapid Transit Authority Chiyoda Line. The light placement, front doors, and so on tend to fall apart.<br>
TokyoMetro6000, subway station, train station, train, railroad tracks, scenery, chinese text, real world location, headlight<br>
TokyoMetro6000, subway station, train station, train, scenery, fence, outdoors, real world location, night, railroad tracks, ceiling, ceiling light, tail lights<br>
<br>
## TokyoMetro7000_SD15

The 7000 series that used to run on the Teito Rapid Transit Authority Yurakucho Line. The light placement, front doors, and so on tend to fall apart.<br>
TokyoMetro7000, subway station, train station, train, scenery, railroad tracks, tiles, ceiling, ceiling light, headlight<br>
TokyoMetro7000, subway station, train station, train, railroad tracks, scenery, outdoors, real world location, day, building, tail lights<br>
<br>
## TokyoMetro8000_SD15

The Teito Rapid Transit Authority Hanzomon Line 8000 series. The light placement, front doors, and so on tend to fall apart.<br>
TokyoMetro8000, train, railroad tracks, real world location, outdoors, scenery, building, sky, day, power lines, headlight<br>
TokyoMetro8000, subway station, train station, train, scenery, ceiling, ceiling light, headlight<br>
TokyoMetro8000, subway station, train station, train, scenery, tail lights<br>
<br>
|
mikeee/phi2_DPO | mikeee | 2024-02-03T05:49:35Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-02-03T05:49:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
blueapple8259/TinyKo-v5-b | blueapple8259 | 2024-02-03T05:48:20Z | 62 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-03T05:29:46Z |
---
license: mit
datasets:
- maywell/korean_textbooks
language:
- ko
---
This model was created by lightly fine-tuning the [TinyKo-v5-a](https://huggingface.co/blueapple8259/TinyKo-v5-a) model.
Warning: performance is very poor and hallucinations are severe.
## Model information
model type: llama
hidden size: 6
hidden size: 127
num attention heads: 16
num key value heads: 4
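A minimal generation sketch (not part of the original card); as warned above, output quality will be very low:
```python
from transformers import pipeline

# Load the tiny Korean language model from the Hub.
generator = pipeline("text-generation", model="blueapple8259/TinyKo-v5-b")

# Korean prompt meaning "Once upon a time"; expect heavy hallucination.
print(generator("옛날 옛적에", max_new_tokens=50)[0]["generated_text"])
```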
|
blueapple8259/TinyKo-v5-a | blueapple8259 | 2024-02-03T05:48:08Z | 2,256 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "ko", "dataset:maywell/korean_textbooks", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-03T05:24:22Z |
---
license: mit
datasets:
- maywell/korean_textbooks
language:
- ko
---
This model was trained on the tiny-textbooks subset of the [korean_textbooks](https://huggingface.co/datasets/maywell/korean_textbooks) dataset.
Warning: performance is very poor and hallucinations are severe.
## Model information
model type: llama
hidden size: 6
hidden size: 127
num attention heads: 16
num key value heads: 4
|
mohdmurtuzakhan/G8_mistral7b_qlora_1211_v02 | mohdmurtuzakhan | 2024-02-03T05:46:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-02-03T05:46:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SilentSpeak/torchnet | SilentSpeak | 2024-02-03T05:41:01Z | 0 | 0 | null | ["en", "license:gpl-3.0", "region:us"] | null | 2023-11-22T11:01:35Z |
---
license: gpl-3.0
language:
- en
metrics:
- wer
---
# LipNet Phonemes Predictors
The project was developed using Python 3.8 on Linux Ubuntu 24.04.
Run `python -m pip install -r requirements.txt` to make sure your dependencies are the same as mine.
The lists of video files used for training and validation when training normal LipNet (not phoneme prediction) are in unseen_train.txt and unseen_test.txt, respectively.
The datasets are zipped in lip/*.zip; unzip them into the same location and run `python main.py` to start training.
Hyperparameters are found in options.py.
## Project Setup
1. pull this repo using `git pull https://huggingface.co/SilentSpeak/torchnet phonemes`
2. initialize a python virtualenv for this project using `python3.8 -m venv venv`
3. initialize the virtualenv using `source venv/bin/activate`
4. run `python -m pip install -r requirements.txt` to get dependencies
5. install git LFS using `git lfs install`
6. pull the GRID dataset and saved tensorboard runs using `git lfs pull`
Following the project setup, you can run training as follows:
To run training for the LipNet phonemes predictor, run `python main.py`
To run training for the LipNet phonemes to text transformer predictor, run `python TransformerTrainer.py`
To run training for the LipNet-to-BiGRU-to-text transformer predictor, run `python TranslatorTrainer.py`
To run evaluation for the lipnet phonemes predictor + phonemes-to-text transformer end-to-end pipeline,
run `cd tests && python lipnet-pipeline.py`. The model weights used in `lipnet-pipeline.py` are included in the repo as
LFS files in the `saved-weights` folder.
The LRS2 dataset was too large to include in the repo, and access to the LRS2 dataset is conditional on accepting the non-commercial usage license. However, the config file for training on the LRS2 dataset can be found in `options_lrs2.py`, and the preprocessing code for the LRS2 dataset can be found in `scripts/extract_crop_lips_v2.py` and `scripts/generate_lsr2_train.py`.
The LRS2 dataset itself can be found at [https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html](https://www.robots.ox.ac.uk/~vgg/data/lip_reading/lrs2.html)
|
LoneStriker/Blue-Orchid-2x7b-AWQ | LoneStriker | 2024-02-03T05:40:37Z | 30 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us"] | text-generation | 2024-02-03T05:37:35Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models, the other is a merge of mostly storywriting models. So it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
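As an illustrative sketch only (not from the original card), the LimaRP template above can be assembled and passed to the model with `transformers`. This assumes the AWQ checkpoint in this repo loads through `transformers` with `autoawq` installed; the system prompt and user turn are placeholders.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/Blue-Orchid-2x7b-AWQ"  # this repo; requires autoawq for the 4-bit AWQ weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a prompt in the LimaRP format shown above (placeholder system prompt and user turn).
prompt = (
    "### Instruction:\n"
    "You are the narrator of a light adventure story.\n\n"
    "### Input:\n"
    "User: Describe the abandoned lighthouse we just reached.\n\n"
    "### Response:\n"
    "Character:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```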
|
Imran1/MedChat3.5 | Imran1 | 2024-02-03T05:39:25Z | 5 | 2 | transformers, Unsloth, Peft, trl, accelerate, bitsandbytes | ["transformers, Unsloth, Peft, trl, accelerate, bitsandbytes", "safetensors", "mistral", "medical", "language model", "NLP", "license:mit", "region:us"] | null | 2024-01-17T05:55:41Z |
---
library_name: transformers, Unsloth, Peft, trl, accelerate, bitsandbytes
tags:
- medical
- language model
- NLP
license: mit
---
# Model Card for MedChat3.5
## Model Details
### Model Description
MedChat3.5 is a specialized language model based on the OpenChat 3.5 architecture, fine-tuned for biomedical natural language processing (NLP) tasks. The model has been tailored using the Llama2-MedTuned-Instructions dataset, which includes approximately 200,000 samples specifically designed for instruction-based learning in biomedical contexts. The model excels in tasks such as Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA).
- **Developed by:** Imran Ullah
- **Model type:** Language Model (LM), fine-tuned for medical NLP
- **Language(s) (NLP):** English (Biomedical Text)
- **License:** [MIT]
- **Finetuned from model [optional]:** OpenChat 3.5
## Dataset Information
### Dataset Name: Llama2-MedTuned-Instructions
#### Dataset Description
Llama2-MedTuned-Instructions is an instruction-based dataset developed for training language models in biomedical NLP tasks. Comprising approximately 200,000 samples, the dataset guides models through tasks like Named Entity Recognition (NER), Relation Extraction (RE), Medical Natural Language Inference (NLI), Document Classification, and Question Answering (QA). It consolidates subsets from well-known biomedical datasets, ensuring a diverse and comprehensive training experience.
#### Source Datasets and Composition
- Named Entity Recognition (NER): NCBI-disease, BC5CDR-disease, BC5CDR-chem, BC2GM, JNLPBA, i2b2-2012
- Relation Extraction (RE): i2b2-2010, GAD
- Natural Language Inference (NLI): MedNLI
- Document Classification: Hallmarks of cancer (HoC)
- Question Answering (QA): ChatDoctor, PMC-Llama-Instructions
#### Prompting Strategy
Each sample in the dataset follows a three-part structure: Instruction, Input, and Output, facilitating instruction-based learning.
#### Usage and Application
Ideal for training and evaluating models on biomedical NLP tasks, MedChat3.5 serves as a benchmark for assessing model performance in domain-specific tasks, comparing against established models like BioBERT and BioClinicalBERT.
## Inference Instructions
To use MedChat3.5 for inference, follow the provided code snippet using the `transformers` library. Make sure to install the necessary packages and authenticate with a Hugging Face API token. Adjust parameters like temperature, top-p, and top-k for the desired generation behavior. The model is optimized for tasks such as question answering and generating responses in biomedical contexts.
```python
# Example Inference Code
!pip install -q --upgrade git+https://github.com/huggingface/transformers.git
!pip install -q accelerate datasets bitsandbytes peft
# use your own Hugging Face secret token
from google.colab import userdata
hf_token = userdata.get('HF_TOKEN')
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
path = "Imran1/MedChat3.5"
# Load base LLM model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
token=hf_token,
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(path, token=hf_token)
tokenizer.eos_token_id = model.config.eos_token_id
tokenizer.pad_token = tokenizer.eos_token
streamer = TextStreamer(tokenizer)
tx = '''
GPT4 Correct Assistant: you are a stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
'''
import warnings
warnings.filterwarnings('ignore') # Ignore all warnings
inputs = tokenizer(tx, return_tensors="pt", return_attention_mask=False).to('cuda')
generation_params = {
'max_new_tokens': 500,
'use_cache': True,
'do_sample': True,
'temperature': 0.7,
'top_p': 0.9,
'top_k': 50
}
outputs = model.generate(**inputs, **generation_params, streamer=streamer)
decoded_outputs = tokenizer.batch_decode(outputs)
# output
'''
<s>
GPT4 Correct Assistant: you are stomach specialist.<|end_of_turn|>
GPT4 Correct User: What role does gastric acid play in the process of digestion, and how does the stomach regulate its secretion to maintain a healthy digestive environment?<|end_of_turn|>
GPT4 Correct Assistant:
Gastric acid plays a crucial role in the process of digestion by breaking down food into its basic components. It is secreted by the cells lining the stomach, known as parietal cells, in response to the presence of food in the stomach.
The stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. The primary mechanism is the release of gastrin, a hormone produced by the stomach's G-cells in response to the presence of food. Gastrin stimulates the parietal cells to secrete gastric acid, which in turn aids in the breakdown of food.
The stomach also regulates the secretion of gastric acid through the release of histamine, which is produced by the ECL cells in response to the presence of food. Histamine acts on the parietal cells to stimulate gastric acid secretion.
Another mechanism involves the production of intrinsic factor, a protein produced by the stomach's mucous cells. Intrinsic factor is essential for the absorption of vitamin B12 in the small intestine. The production of intrinsic factor is regulated by gastric acid, which helps maintain a healthy balance of this essential nutrient.
Additionally, the stomach regulates the secretion of gastric acid through the release of somatostatin, a hormone produced by the D-cells of the stomach. Somatostatin inhibits gastric acid secretion, helping to maintain a healthy balance between acid production and neutralization.
In summary, the stomach regulates the secretion of gastric acid through a series of mechanisms that maintain a healthy digestive environment. These mechanisms include the release of gastrin, histamine, and intrinsic factor, as well as the release of somatostatin. By maintaining a balance between acid production and neutralization, the stomach ensures that the digestive environment remains conducive to proper digestion and absorption of nutrients.<|end_of_turn|>
'''
```
|
thisiswooyeol/Reinforce-CartPole-v1 | thisiswooyeol | 2024-02-03T05:29:26Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2024-02-03T05:29:15Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
LoneStriker/Blue-Orchid-2x7b-8.0bpw-h8-exl2
|
LoneStriker
| 2024-02-03T05:26:08Z | 9 | 5 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T20:00:17Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying-focused MoE Mistral model.
One expert is a merge of mostly RP models, and the other is a merge of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
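For illustration, the sketch below assembles the LimaRP template above in Python; the system prompt, user message, and character name are placeholder values.
```python
# Minimal sketch: assemble a LimaRP-style prompt string (placeholder values).
system_prompt = "You are roleplaying as Character, a bartender in a fantasy tavern."
user_message = "Hello there, what's on the menu tonight?"

prompt = (
    "### Instruction:\n"
    f"{system_prompt}\n"
    "### Input:\n"
    f"User: {user_message}\n"
    "### Response:\n"
    "Character:"
)
print(prompt)
```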
|
matteo1997/5_images_dreambooth_lora_step1000
|
matteo1997
| 2024-02-03T05:24:53Z | 1 | 2 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-03T04:27:23Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a green car in the forest'
output:
url:
"image_0.png"
- text: 'a green car in the forest'
output:
url:
"image_1.png"
- text: 'a green car in the forest'
output:
url:
"image_2.png"
- text: 'a green car in the forest'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a blue car
license: openrail++
---
# SDXL LoRA DreamBooth - matteo1997/5_images_dreambooth_lora_step1000
<Gallery />
## Model description
These are matteo1997/5_images_dreambooth_lora_step1000 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a blue car to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matteo1997/5_images_dreambooth_lora_step1000/tree/main) them in the Files & versions tab.
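For inference, a minimal 🤗 diffusers sketch is shown below; it assumes a CUDA GPU and that the LoRA weights load directly from this repository id (assumptions, not part of the original card).
```python
# Minimal sketch (assumes a CUDA GPU; loads the LoRA directly from the Hub repo id).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matteo1997/5_images_dreambooth_lora_step1000")

# The instance prompt from this card is "a blue car".
image = pipe("a blue car", num_inference_steps=30).images[0]
image.save("blue_car.png")
```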
|
omartariq612/quran-whisper-medium-1
|
omartariq612
| 2024-02-03T05:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T05:09:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Blue-Orchid-2x7b-3.0bpw-h6-exl2
|
LoneStriker
| 2024-02-03T05:07:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-01T19:45:12Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying-focused MoE Mistral model.
One expert is a merge of mostly RP models, and the other is a merge of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
|
kanishka/smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
|
kanishka
| 2024-02-03T05:04:37Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"dataset:kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T06:33:53Z |
---
tags:
- generated_from_trainer
datasets:
- kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
metrics:
- accuracy
model-index:
- name: smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
type: kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal
metrics:
- name: Accuracy
type: accuracy
value: 0.40997045687548256
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-autoreg-bpe-counterfactual-babylm-pipps_and_keys_to_it_all_removal-seed_211-1e-3
This model was trained from scratch on the kanishka/counterfactual-babylm-pipps_and_keys_to_it_all_removal dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4342
- Accuracy: 0.4100
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 64
- seed: 211
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 32000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.5982 | 1.0 | 18594 | 3.7814 | 0.3600 |
| 3.3842 | 2.0 | 37188 | 3.5917 | 0.3792 |
| 3.2578 | 3.0 | 55782 | 3.4820 | 0.3923 |
| 3.181 | 4.0 | 74376 | 3.4444 | 0.3975 |
| 3.127 | 5.0 | 92970 | 3.4062 | 0.4023 |
| 3.0853 | 6.0 | 111564 | 3.3876 | 0.4042 |
| 3.0444 | 7.0 | 130158 | 3.3845 | 0.4051 |
| 3.0164 | 8.0 | 148752 | 3.3997 | 0.4067 |
| 2.9875 | 9.0 | 167346 | 3.3890 | 0.4077 |
| 2.9637 | 10.0 | 185940 | 3.3966 | 0.4072 |
| 2.9414 | 11.0 | 204534 | 3.3861 | 0.4084 |
| 2.9102 | 12.0 | 223128 | 3.3732 | 0.4095 |
| 2.8918 | 13.0 | 241722 | 3.3955 | 0.4091 |
| 2.8738 | 14.0 | 260316 | 3.3978 | 0.4096 |
| 2.8518 | 15.0 | 278910 | 3.3918 | 0.4102 |
| 2.8325 | 16.0 | 297504 | 3.4144 | 0.4098 |
| 2.8187 | 17.0 | 316098 | 3.4153 | 0.4102 |
| 2.7944 | 18.0 | 334692 | 3.4143 | 0.4103 |
| 2.7783 | 19.0 | 353286 | 3.4294 | 0.4100 |
| 2.7617 | 20.0 | 371880 | 3.4342 | 0.4100 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
macarious/torgo_xlsr_finetune_M05_old
|
macarious
| 2024-02-03T04:50:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T20:40:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M05
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7932
- Wer: 0.3577
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5534 | 0.99 | 1000 | 3.4455 | 1.0 |
| 2.3481 | 1.98 | 2000 | 1.8194 | 0.8971 |
| 0.9664 | 2.97 | 3000 | 1.2685 | 0.6818 |
| 0.672 | 3.96 | 4000 | 1.3412 | 0.6112 |
| 0.5432 | 4.96 | 5000 | 1.4455 | 0.5275 |
| 0.4393 | 5.95 | 6000 | 1.3948 | 0.4761 |
| 0.3761 | 6.94 | 7000 | 1.8967 | 0.4785 |
| 0.3474 | 7.93 | 8000 | 1.5481 | 0.4545 |
| 0.309 | 8.92 | 9000 | 1.7275 | 0.4354 |
| 0.284 | 9.91 | 10000 | 1.9297 | 0.4438 |
| 0.2582 | 10.9 | 11000 | 1.4894 | 0.3971 |
| 0.2426 | 11.89 | 12000 | 1.6811 | 0.3840 |
| 0.2406 | 12.88 | 13000 | 1.7411 | 0.3935 |
| 0.2281 | 13.88 | 14000 | 1.7894 | 0.3732 |
| 0.1874 | 14.87 | 15000 | 1.7728 | 0.3864 |
| 0.1918 | 15.86 | 16000 | 2.0315 | 0.3768 |
| 0.1693 | 16.85 | 17000 | 1.7024 | 0.3672 |
| 0.1551 | 17.84 | 18000 | 1.7620 | 0.3684 |
| 0.1645 | 18.83 | 19000 | 1.7186 | 0.3696 |
| 0.1527 | 19.82 | 20000 | 1.7932 | 0.3577 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.13.3
|
matteo1997/10_images_dreambooth_lora_step1000
|
matteo1997
| 2024-02-03T04:25:31Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-03T03:12:33Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a pink car driven on the expressway'
output:
url:
"image_0.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_1.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_2.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a blue car
license: openrail++
---
# SDXL LoRA DreamBooth - matteo1997/10_images_dreambooth_lora_step1000
<Gallery />
## Model description
These are matteo1997/10_images_dreambooth_lora_step1000 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a blue car to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matteo1997/10_images_dreambooth_lora_step1000/tree/main) them in the Files & versions tab.
|
zhangHarry/orca_mini_3b_summary-epoch_0
|
zhangHarry
| 2024-02-03T04:21:53Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:pankajmathur/orca_mini_3b",
"base_model:adapter:pankajmathur/orca_mini_3b",
"region:us"
] | null | 2024-01-20T03:57:01Z |
---
library_name: peft
base_model: psmathur/orca_mini_3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
AdAstra1/q-Taxi-v1
|
AdAstra1
| 2024-02-03T04:01:29Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T04:01:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="AdAstra1/q-Taxi-v1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
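Beyond loading, a minimal greedy-rollout sketch is shown below; it assumes the pickled dictionary returned by the course's `load_from_hub` helper also contains a `qtable` array and that the environment follows the Gymnasium step API (both assumptions, since the card only shows the loading snippet).
```python
# Minimal sketch: greedy rollout with the loaded Q-table.
# Assumes `model` was loaded as above and contains "env_id" and "qtable".
import gymnasium as gym
import numpy as np

env = gym.make(model["env_id"])
state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```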
|
AdAstra1/q-FrozenLake-v1-4x4-noSlippery
|
AdAstra1
| 2024-02-03T04:00:53Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T03:45:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="AdAstra1/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jbuch808/tqc-PandaPickAndPlace-v3
|
jbuch808
| 2024-02-03T03:55:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T03:54:51Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **TQC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
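Since the usage section is still a TODO, here is a minimal loading sketch under stated assumptions: TQC is provided by `sb3_contrib` rather than core `stable_baselines3`, and the checkpoint filename below is a guess based on the usual SB3 Hub naming (check the Files tab for the actual name).
```python
# Minimal sketch; the filename is an assumption.
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC  # TQC lives in sb3-contrib

checkpoint = load_from_hub(
    repo_id="jbuch808/tqc-PandaPickAndPlace-v3",
    filename="tqc-PandaPickAndPlace-v3.zip",
)
model = TQC.load(checkpoint)
print(model.policy)
```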
|
CLMBR/pp-mod-subj-transformer-4
|
CLMBR
| 2024-02-03T03:44:20Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:07:55Z |
---
tags:
- generated_from_trainer
model-index:
- name: pp-mod-subj2-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pp-mod-subj2-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9266
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2297 | 0.03 | 76320 | 4.2433 |
| 4.0275 | 1.03 | 152640 | 4.0750 |
| 3.9187 | 0.03 | 228960 | 4.0013 |
| 3.8499 | 1.03 | 305280 | 3.9602 |
| 3.8009 | 0.03 | 381600 | 3.9359 |
| 3.754 | 1.03 | 457920 | 3.9211 |
| 3.7162 | 0.03 | 534240 | 3.9103 |
| 3.6839 | 1.03 | 610560 | 3.9040 |
| 3.6566 | 0.03 | 686880 | 3.9007 |
| 3.6332 | 1.03 | 763200 | 3.8988 |
| 3.6064 | 0.03 | 839520 | 3.8968 |
| 3.5872 | 1.03 | 915840 | 3.8964 |
| 3.5702 | 0.03 | 992160 | 3.8978 |
| 3.5552 | 1.03 | 1068480 | 3.8977 |
| 3.5343 | 0.03 | 1144800 | 3.9006 |
| 3.5197 | 1.03 | 1221120 | 3.9013 |
| 3.5064 | 0.03 | 1297440 | 3.9038 |
| 3.4941 | 0.03 | 1373760 | 3.9058 |
| 3.481 | 1.03 | 1450080 | 3.9078 |
| 3.4726 | 0.03 | 1526400 | 3.9097 |
| 3.4675 | 1.03 | 1602720 | 3.9105 |
| 3.4502 | 0.03 | 1679040 | 3.9132 |
| 3.4381 | 1.03 | 1755360 | 3.9147 |
| 3.4265 | 0.03 | 1831680 | 3.9167 |
| 3.4144 | 1.03 | 1908000 | 3.9173 |
| 3.4049 | 0.03 | 1984320 | 3.9193 |
| 3.3904 | 0.03 | 2060640 | 3.9211 |
| 3.3792 | 1.03 | 2136960 | 3.9233 |
| 3.3687 | 0.03 | 2213280 | 3.9250 |
| 3.3597 | 1.03 | 2289600 | 3.9263 |
| 3.3466 | 0.03 | 2365920 | 3.9275 |
| 3.3407 | 1.03 | 2442240 | 3.9272 |
| 3.3293 | 0.03 | 2518560 | 3.9300 |
| 3.3238 | 0.03 | 2594880 | 3.9299 |
| 3.3127 | 1.03 | 2671200 | 3.9311 |
| 3.3062 | 0.03 | 2747520 | 3.9313 |
| 3.3036 | 0.03 | 2823840 | 3.9303 |
| 3.2911 | 1.03 | 2900160 | 3.9300 |
| 3.2841 | 0.03 | 2976480 | 3.9290 |
| 3.2768 | 1.02 | 3052726 | 3.9266 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ackerley/NeuralPipe-7B-slerp
|
ackerley
| 2024-02-03T03:37:29Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T03:33:23Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "ackerley/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
acrastt/Bean-3B
|
acrastt
| 2024-02-03T03:36:26Z | 1,522 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:64bits/lima_vicuna_format",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-02T00:06:46Z |
---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- 64bits/lima_vicuna_format
pipeline_tag: text-generation
model-index:
- name: Bean-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 40.36
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.0
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 26.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 36.11
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Bean-3B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [LIMA(ShareGPT format)](https://huggingface.co/datasets/64bits/lima_vicuna_format) for 2 epochs.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
GGUF quantizations available [here](https://huggingface.co/maddes8cht/acrastt-Bean-3B-gguf).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 40.18 |
| ARC (25-shot) | 40.36 |
| HellaSwag (10-shot) | 72.0 |
| MMLU (5-shot) | 26.43 |
| TruthfulQA (0-shot) | 36.11 |
| Winogrande (5-shot) | 65.67 |
| GSM8K (5-shot) | 0.53 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |40.18|
|AI2 Reasoning Challenge (25-Shot)|40.36|
|HellaSwag (10-Shot) |72.00|
|MMLU (5-Shot) |26.43|
|TruthfulQA (0-shot) |36.11|
|Winogrande (5-shot) |65.67|
|GSM8k (5-shot) | 0.53|
|
dengh/a2c-PandaReachDense-v3
|
dengh
| 2024-02-03T03:36:06Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T03:28:08Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.14
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
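Pending the author's own code, a minimal sketch for loading the agent and rolling it out could look like the following; the checkpoint filename is an assumption, and `panda_gym` must be installed so the environment is registered.
```python
# Minimal sketch; filename and rollout length are assumptions.
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="dengh/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(50):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```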
|
acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
|
acrastt
| 2024-02-03T03:35:56Z | 1,535 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:databricks/databricks-dolly-15k",
"dataset:OpenAssistant/oasst1",
"dataset:Muennighoff/natural-instructions",
"dataset:Muennighoff/P3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-07-27T19:42:41Z |
---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- togethercomputer/RedPajama-Data-1T
- databricks/databricks-dolly-15k
- OpenAssistant/oasst1
- Muennighoff/natural-instructions
- Muennighoff/P3
pipeline_tag: text-generation
model-index:
- name: RedPajama-INCITE-Chat-Instruct-3B-V1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 42.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 67.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 25.99
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 33.62
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 0.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is an experimental merge of models [RedPajama-INCITE-Chat-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) and [RedPajama-INCITE-Instruct-3B-V1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1).</br>
This model adapts to different prompt templates, but this one is recommended:
```
HUMAN: {prompt}
ASSISTANT:
```
Feel free to change HUMAN or ASSISTANT. It will not change much.</br>
GGML versions [here](https://huggingface.co/adadbbb/pajama_ggml) (Note that this is only compatible with [koboldcpp](https://github.com/LostRuins/koboldcpp)).
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__RedPajama-INCITE-Chat-Instruct-3B-V1)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 39.23 |
| ARC (25-shot) | 42.58 |
| HellaSwag (10-shot) | 67.48 |
| MMLU (5-shot) | 25.99 |
| TruthfulQA (0-shot) | 33.62 |
| Winogrande (5-shot) | 64.8 |
| GSM8K (5-shot) | 0.91 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__RedPajama-INCITE-Chat-Instruct-3B-V1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |39.23|
|AI2 Reasoning Challenge (25-Shot)|42.58|
|HellaSwag (10-Shot) |67.48|
|MMLU (5-Shot) |25.99|
|TruthfulQA (0-shot) |33.62|
|Winogrande (5-shot) |64.80|
|GSM8k (5-shot) | 0.91|
|
acrastt/Marx-3B
|
acrastt
| 2024-02-03T03:34:32Z | 2,261 | 13 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:totally-not-an-llm/everything-sharegptformat-morecleaned",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-15T18:23:34Z |
---
language:
- en
license: apache-2.0
datasets:
- totally-not-an-llm/everything-sharegptformat-morecleaned
pipeline_tag: text-generation
model-index:
- name: Marx-3B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 43.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 72.68
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 28.46
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.59
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.29
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=acrastt/Marx-3B
name: Open LLM Leaderboard
---
<a href="https://www.buymeacoffee.com/acrastt" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [EverythingLM Data (ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epoch.
Prompt template:
```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
GGML quants available [here](https://huggingface.co/TheBloke/Marx-3b-GGML).</br>
GPTQ quants available [here](https://huggingface.co/TheBloke/Marx-3b-GPTQ).
Note: Don't expect this model to be good; I was just starting out with finetuning, so please don't roast me!
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Marx-3B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 41.71 |
| ARC (25-shot) | 43.17 |
| HellaSwag (10-shot) | 72.68 |
| MMLU (5-shot) | 28.46 |
| TruthfulQA (0-shot) | 39.09 |
| Winogrande (5-shot) | 65.59 |
| GSM8K (5-shot) | 1.29 |
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Marx-3B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |41.71|
|AI2 Reasoning Challenge (25-Shot)|43.17|
|HellaSwag (10-Shot) |72.68|
|MMLU (5-Shot) |28.46|
|TruthfulQA (0-shot) |39.09|
|Winogrande (5-shot) |65.59|
|GSM8k (5-shot) | 1.29|
|
Verias/convo-devia
|
Verias
| 2024-02-03T03:27:22Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:cdla-permissive-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T03:25:18Z |
---
license: cdla-permissive-2.0
---
|
Verias/DialoGPT-small-devia
|
Verias
| 2024-02-03T03:18:56Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:cdla-permissive-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T23:37:14Z |
---
license: cdla-permissive-2.0
---
|
ND911/Franken-Maid-Slerp
|
ND911
| 2024-02-03T03:09:19Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE",
"ND911/EE-LMaid-7B-Slerp",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T03:02:48Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
- ND911/EE-LMaid-7B-Slerp
---

Experimental RP merges - using SillyTavern with Min-P
SanjiWatsuki/Loyal-Macaroni-Maid-7B merged with ND911/EE-Maid-7B-Slerp, which is itself a merge of SanjiWatsuki/Silicon-Maid-7B and maywell/Synatra-7B-v0.3-RP.
EE-LMaid-7B-Slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B)
* [ND911/EE-Maid-7B-Slerp](https://huggingface.co/ND911/EE-Maid-7B-Slerp)
# Franken-Maid-Slerp
Franken-Maid-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE](https://huggingface.co/SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE)
* [ND911/EE-LMaid-7B-Slerp](https://huggingface.co/ND911/EE-LMaid-7B-Slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
layer_range: [0, 32]
- model: ND911/EE-LMaid-7B-Slerp
layer_range: [0, 32]
merge_method: slerp
base_model: ND911/EE-LMaid-7B-Slerp
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
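This card includes the merge configuration but no usage snippet; a minimal sketch in the style of the other merge cards, assuming the tokenizer ships a chat template and using illustrative sampling parameters, follows.
```python
# Minimal sketch (illustrative parameters; assumes the tokenizer provides a chat template).
import torch
import transformers
from transformers import AutoTokenizer

model = "ND911/Franken-Maid-Slerp"
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```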
|
Jimmyhd/llama2TimeBook
|
Jimmyhd
| 2024-02-03T02:58:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T00:23:11Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
rizla/rizla-17
|
rizla
| 2024-02-03T02:55:19Z | 235 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"dpo",
"merge",
"mergekit",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T21:23:06Z |
---
license: cc-by-nc-nd-4.0
base_model:
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- dpo
- merge
- mergekit
---
# rizla been cooking while singing
# This is an experimental model that I made by merging two 2expmixtrals. The mergekitty is a tool that lets me mix and match different models into one big model, keeping all the smarts and skills of the original models. The llama70b is a huge language model that can make words for all kinds of things and ways, based on the GPT-4 thingy.
The merged model has 17 billion parameters and was made to run on a minimum of 8 GB of RAM with the Q3_K_L GGUF quantization.
## Merge me baby one more time
### Sending this contraption out straight to mergeland, wwhheeeeeeeeeeeee LFG 🚀
|
matteo1997/lora-trained-xl
|
matteo1997
| 2024-02-03T02:51:03Z | 2 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-30T06:23:49Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a pink car driven on the expressway'
output:
url:
"image_0.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_1.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_2.png"
- text: 'a pink car driven on the expressway'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a blue car
license: openrail++
---
# SDXL LoRA DreamBooth - matteo1997/lora-trained-xl
<Gallery />
## Model description
These are matteo1997/lora-trained-xl LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a blue car to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matteo1997/lora-trained-xl/tree/main) them in the Files & versions tab.
|
OEvortex/HelpingAI-Lite-GGUF
|
OEvortex
| 2024-02-03T02:48:17Z | 67 | 2 |
transformers
|
[
"transformers",
"gguf",
"HelpingAI",
"lite",
"code",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-01-17T13:23:01Z |
---
library_name: transformers
language:
- en
license: mit
tags:
- HelpingAI
- lite
- code
pipeline_tag: text-generation
---
#### Description
Optimize your engagement with [This project](https://huggingface.co/OEvortex/HelpingAI-Lite) by seamlessly integrating GGUF format model files.
Please subscribe to my YouTube channel [OEvortex](https://youtube.com/@OEvortex).
### GGUF Technical Specifications
Delve into the intricacies of GGUF, a meticulously crafted format that builds upon the robust foundation of the GGJT model. Tailored for heightened extensibility and user-centric functionality, GGUF introduces a suite of indispensable features:
**Single-file Deployment:** Streamline distribution and loading effortlessly. GGUF models have been meticulously architected for seamless deployment, necessitating no external files for supplementary information.
**Extensibility:** Safeguard the future of your models. GGUF seamlessly accommodates the integration of new features into GGML-based executors, ensuring compatibility with existing models.
**mmap Compatibility:** Prioritize efficiency. GGUF models are purposefully engineered to support mmap, facilitating rapid loading and saving, thus optimizing your workflow.
**User-Friendly:** Simplify your coding endeavors. Load and save models effortlessly, irrespective of the programming language used, obviating the dependency on external libraries.
**Full Information:** A comprehensive repository in a single file. GGUF models encapsulate all requisite information for loading, eliminating the need for users to furnish additional data.
The differentiator between GGJT and GGUF lies in the deliberate adoption of a key-value structure for hyperparameters (now termed metadata). Bid farewell to untyped lists, and embrace a structured approach that seamlessly accommodates new metadata without compromising compatibility with existing models. Augment your model with supplementary information for enhanced inference and model identification.
**QUANTIZATION_METHODS:**
| Method | Quantization | Advantages | Trade-offs |
|---|---|---|---|
| q2_k | 2-bit integers | Significant model size reduction | Minimal impact on accuracy |
| q3_k_l | 3-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q3_k_m | 3-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q3_k_s | 3-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q4_0 | 4-bit integers | Significant model size reduction | Moderate impact on accuracy |
| q4_1 | 4-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q4_k_m | 4-bit integers | Optimized model size and accuracy with mixed precision and structured pruning | Reduced accuracy |
| q4_k_s | 4-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q5_0 | 5-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q5_1 | 5-bit integers | Enhanced accuracy with mixed precision | Increased computational complexity |
| q5_k_m | 5-bit integers | Optimized model size and accuracy with mixed precision and structured pruning | Reduced accuracy |
| q5_k_s | 5-bit integers | Improved model efficiency with structured pruning | Reduced accuracy |
| q6_k | 6-bit integers | Balance between model size reduction and accuracy preservation | Moderate impact on accuracy |
| q8_0 | 8-bit integers | Significant model size reduction | Minimal impact on accuracy |
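As a concrete example, a GGUF file from this repository can be loaded with `llama-cpp-python`; the filename below is an assumption, so substitute whichever quantization you actually download.
```python
# Minimal sketch using llama-cpp-python; the filename is an assumption,
# replace it with the quantization you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="helpingai-lite.q4_k_m.gguf", n_ctx=2048)
output = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```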
|
jbuch808/sac-PandaPickAndPlace-v3
|
jbuch808
| 2024-02-03T02:47:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-03T02:46:07Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **SAC** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
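Until the author adds code here, a rough evaluation sketch is shown below; the checkpoint filename is an assumption, and `panda_gym` is required for the environment.
```python
# Minimal sketch; the filename is an assumption.
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments
from huggingface_sb3 import load_from_hub
from stable_baselines3 import SAC
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="jbuch808/sac-PandaPickAndPlace-v3",
    filename="sac-PandaPickAndPlace-v3.zip",
)
model = SAC.load(checkpoint)

env = gym.make("PandaPickAndPlace-v3")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```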
|
omusico/NeuralPipe-7B-slerp
|
omusico
| 2024-02-03T02:44:58Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:25:34Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "omusico/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
fong33/NeuralPipe-7B-slerp
|
fong33
| 2024-02-03T02:39:12Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:35:22Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "fong33/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
bart-automation/sft_zephyr
|
bart-automation
| 2024-02-03T02:34:38Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-alpha",
"base_model:adapter:HuggingFaceH4/zephyr-7b-alpha",
"license:mit",
"region:us"
] | null | 2024-02-03T02:34:23Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-alpha
model-index:
- name: sft_zephyr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft_zephyr
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 5
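Since the training script is not included, the sketch below only illustrates how a comparable SFT run could be set up with TRL and PEFT using the hyperparameters above; the dataset, LoRA settings, and sequence length are assumptions, not details taken from this card.
```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "HuggingFaceH4/zephyr-7b-alpha"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder dataset with a "text" column; the dataset actually used is not documented.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Assumed LoRA configuration (the adapter hyperparameters are not documented either).
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = TrainingArguments(
    output_dir="sft_zephyr",
    per_device_train_batch_size=8,   # train_batch_size from the list above
    learning_rate=2e-4,              # learning_rate from the list above
    lr_scheduler_type="constant",
    num_train_epochs=5,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",
    peft_config=peft_config,
    max_seq_length=1024,             # assumed value
    tokenizer=tokenizer,
)
trainer.train()
```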
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
frankc350/NeuralPipe-7B-slerp
|
frankc350
| 2024-02-03T02:28:03Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:23:45Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "frankc350/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
dictatee/NeuralPipe-7B-slerp
|
dictatee
| 2024-02-03T02:24:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:20:25Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "dictatee/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
XinHun/YD_JQS
|
XinHun
| 2024-02-03T02:22:24Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2024-02-03T02:20:35Z |
---
license: other
license_name: '1'
license_link: LICENSE
---
|
weimenglin/NeuralPipe-7B-slerp
|
weimenglin
| 2024-02-03T02:13:29Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-03T02:09:14Z |
---
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "weimenglin/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Kquant03/Mistral-7B-Instruct-v0.2-Neural-Story-GGUF
|
Kquant03
| 2024-02-03T01:33:14Z | 34 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:NeuralNovel/Neural-Story-v1",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2024-02-01T18:30:55Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- NeuralNovel/Neural-Story-v1
library_name: transformers
inference: false
language:
- en
---

# NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story
[BASE MODEL HERE](https://huggingface.co/NeuralNovel/Mistral-7B-Instruct-v0.2-Neural-Story)
The **Mistral-7B-Instruct-v0.2-Neural-Story** model, developed by NeuralNovel and funded by Techmind, is a language model finetuned from Mistral-7B-Instruct-v0.2.
It is designed to generate instructive and narrative text, with a specific focus on storytelling.
This fine-tune has been tailored to provide detailed and creative responses in a narrative context and is optimised for short storytelling.
It is based on Mistral AI's model and released under the Apache 2.0 license, making it suitable for commercial or non-commercial use.
### Data-set
The model was finetuned using the Neural-Story-v1 dataset.
### Benchmark
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **64.96** |
| ARC | 64.08 |
| HellaSwag | **66.89** |
| MMLU | 60.67 |
| TruthfulQA | 66.89 |
| Winogrande | **75.85** |
| GSM8K | 38.29 |
Evaluated on **HuggingFaceH4/open_llm_leaderboard**
### Summary
The model was fine-tuned with the intention of generating creative and narrative text, making it more suitable for creative writing prompts and storytelling.
#### Out-of-Scope Use
The model may not perform well in scenarios unrelated to instructive and narrative text generation. Misuse or applications outside its designed scope may result in suboptimal outcomes.
### Bias, Risks, and Limitations
The model may exhibit biases or limitations inherent in the training data. It is essential to consider these factors when deploying the model to avoid unintended consequences.
While the Neural-Story-v1 dataset serves as an excellent starting point for testing language models, users are advised to exercise caution, as there might be some inherent genre or writing bias.
### Hardware and Training
```
n_epochs = 3,
n_checkpoints = 3,
batch_size = 12,
learning_rate = 1e-5,
```
*Sincere appreciation to Techmind for their generous sponsorship.*
|
r3m3c3/english-to-kanji-c46000_model_3_v_0
|
r3m3c3
| 2024-02-03T01:32:01Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T01:30:42Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dipudl/codeT5-DistilBERT-operator-precedence-bug-model
|
dipudl
| 2024-02-03T01:27:56Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T18:48:40Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
model-index:
- name: codeT5-DistilBERT-operator-precedence-bug-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeT5-DistilBERT-operator-precedence-bug-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1423
- Accuracy: 0.9446
- Precision: 0.9369
- Recall: 0.9731
- F1 score: 0.9547
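A minimal inference sketch (the example input is a placeholder, and the label names are whatever was configured at training time; they are not documented here):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="dipudl/codeT5-DistilBERT-operator-precedence-bug-model",
)
# Placeholder code snippet to score for a possible operator-precedence bug.
print(classifier("if (a & b == c) { run(); }"))
```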
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
r3m3c3/english-to-kanji-c42000_model_3_v_0
|
r3m3c3
| 2024-02-03T01:21:00Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T01:19:56Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
r3m3c3/english-to-kanji-c29000_model_3_v_0
|
r3m3c3
| 2024-02-03T01:12:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T01:10:52Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
karawalla/aqmodel_20240203
|
karawalla
| 2024-02-03T01:02:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-03T01:02:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
r3m3c3/english-to-kanji-c20000_model_3_v_0
|
r3m3c3
| 2024-02-03T01:01:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T01:00:15Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
r3m3c3/english-to-kanji-c18000_model_3_v_0
|
r3m3c3
| 2024-02-03T00:58:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-03T00:57:19Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vinluvie/clip-vit-large-patch14-finetuned
|
vinluvie
| 2024-02-03T00:48:20Z | 71 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-02-02T20:15:06Z |
---
base_model: openai/clip-vit-large-patch14
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: clip-vit-large-patch14-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-finetuned
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
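A minimal zero-shot classification sketch (the image path and candidate labels are placeholders):
```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-image-classification",
    model="vinluvie/clip-vit-large-patch14-finetuned",
)
# Placeholder image and labels; replace with your own.
print(classifier("example.jpg", candidate_labels=["a photo of a cat", "a photo of a dog"]))
```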
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
daila/wav2vec2-large-xls-r-300m-vi-colab
|
daila
| 2024-02-03T00:21:35Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_1",
"base_model:daila/wav2vec2-large-xls-r-300m-vi-colab",
"base_model:finetune:daila/wav2vec2-large-xls-r-300m-vi-colab",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T10:20:44Z |
---
base_model: daila/wav2vec2-large-xls-r-300m-vi-colab
tags:
- generated_from_trainer
datasets:
- common_voice_16_1
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-vi-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_1
type: common_voice_16_1
config: vi
split: test
args: vi
metrics:
- name: Wer
type: wer
value: 0.5894672631150875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-vi-colab
This model is a fine-tuned version of [daila/wav2vec2-large-xls-r-300m-vi-colab](https://huggingface.co/daila/wav2vec2-large-xls-r-300m-vi-colab) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6432
- Wer: 0.5895
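A minimal transcription sketch (the audio path is a placeholder; XLS-R models expect 16 kHz speech, here Vietnamese):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="daila/wav2vec2-large-xls-r-300m-vi-colab",
)
# Placeholder audio file; replace with your own recording.
print(asr("sample.wav")["text"])
```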
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0916 | 4.52 | 400 | 1.5440 | 0.6357 |
| 0.1344 | 9.04 | 800 | 1.6043 | 0.6543 |
| 0.0926 | 13.56 | 1200 | 1.7226 | 0.6365 |
| 0.0703 | 18.08 | 1600 | 1.5989 | 0.6048 |
| 0.0557 | 22.6 | 2000 | 1.6714 | 0.6001 |
| 0.051 | 27.12 | 2400 | 1.6432 | 0.5895 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
microsoft/falcon-7B-onnx
|
microsoft
| 2024-02-02T23:43:40Z | 0 | 0 | null |
[
"onnx",
"falcon-7b",
"falcon",
"onnxruntime",
"llm",
"en",
"base_model:tiiuae/falcon-7b",
"base_model:quantized:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
] | null | 2023-11-14T20:40:36Z |
---
license: apache-2.0
base_model: tiiuae/falcon-7b
language:
- en
tags:
- falcon-7b
- falcon
- onnxruntime
- onnx
- llm
---
#### This is an optimized version of the Falcon 7B model, available on this repository: https://huggingface.co/tiiuae/falcon-7b and under the license on such repository. Microsoft permits you to use, modify, redistribute and create derivatives of Microsoft's contributions to the optimized version subject to the restrictions and disclaimers of warranty and liability in license agreement.
# falcon-7b for ONNX Runtime
## Introduction
This repository hosts the optimized version of **falcon-7b** to accelerate inference with ONNX Runtime CUDA execution provider.
See the [usage instructions](#usage-example) for how to inference this model with the ONNX files hosted in this repository.
## Model Description
- **Developed by:** TIIUAE
- **Model type:** Pretrained generative text model
- **License:** Apache 2.0 License
- **Model Description:** This is a conversion of the [falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with CUDA execution provider.
## Performance Comparison
#### Latency for token generation
Below is the average latency of generating a token for prompts of varying lengths on an NVIDIA A100-SXM4-80GB GPU:
| Prompt Length | Batch Size | PyTorch 2.1 torch.compile | ONNX Runtime CUDA |
|-------------|------------|----------------|-------------------|
| 32 | 1 | 53.64ms | 15.68ms |
| 256 | 1 | 59.55ms | 26.05ms |
| 1024 | 1 | 89.82ms | 99.05ms |
| 2048 | 1 | 208.0ms | 227.0ms |
| 32 | 4 | 70.8ms | 19.62ms |
| 256 | 4 | 78.6ms | 81.29ms |
| 1024 | 4 | 373.7ms | 369.6ms |
| 2048 | 4 | N/A | 879.2ms |
## Usage Example
1. Clone onnxruntime repository.
```shell
git clone https://github.com/microsoft/onnxruntime
cd onnxruntime
```
2. Install required dependencies
```shell
python3 -m pip install -r onnxruntime/python/tools/transformers/models/llama/requirements-cuda.txt
```
3. Inference using custom model API, or use Hugging Face's ORTModelForCausalLM
```python
from optimum.onnxruntime import ORTModelForCausalLM
from onnxruntime import InferenceSession
from transformers import AutoConfig, AutoTokenizer
sess = InferenceSession("falcon-7b.onnx", providers = ["CUDAExecutionProvider"])
config = AutoConfig.from_pretrained("tiiuae/falcon-7b")
# Wrap the ONNX session with the ORTModelForCausalLM class imported above.
model = ORTModelForCausalLM(sess, config, use_cache=True, use_io_binding=True)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
inputs = tokenizer("Instruct: What is a fermi paradox?\nOutput:", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
merge-crew/munin-neuralbeagle-7b-density-very-low
|
merge-crew
| 2024-02-02T23:34:17Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:finetune:mlabonne/NeuralBeagle14-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T23:30:19Z |
---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
base_model:
- mlabonne/NeuralBeagle14-7B
---
# munin-neuralbeagle-7b-density-very-low
munin-neuralbeagle-7b-density-very-low is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: danish-foundation-models/munin-7b-alpha
# No parameters necessary for base model
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.1
weight: 0.6
merge_method: dare_ties
base_model: danish-foundation-models/munin-7b-alpha
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "merge-crew/munin-neuralbeagle-7b-density-very-low"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
nakodanei/Blue-Orchid-2x7b
|
nakodanei
| 2024-02-02T23:32:26Z | 696 | 77 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-30T18:38:47Z |
---
license: apache-2.0
---
**Blue-Orchid-2x7b**
GGUF: https://huggingface.co/nakodanei/Blue-Orchid-2x7b_GGUF
Roleplaying focused MoE Mistral model.
One expert is a merge of mostly RP models; the other is a merge of mostly storywriting models, so it should be good at both. The base model is SanjiWatsuki/Kunoichi-DPO-v2-7B.
- Expert 1 is a merge of LimaRP, Limamono, Noromaid 0.4 DPO and good-robot.
- Expert 2 is a merge of Erebus, Holodeck, Dans-AdventurousWinds-Mk2, Opus, Ashhwriter and good-robot.
## Prompt template (LimaRP):
```
### Instruction:
{system prompt}
### Input:
User: {prompt}
### Response:
Character:
```
Alpaca prompt template should work fine too.
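For reference, here is a minimal `transformers` sketch that fills the LimaRP-style template above. It assumes the model loads through the standard `AutoModelForCausalLM` API (the repository uses the Mixtral architecture); the system prompt and user message are placeholders, and the sampling settings are illustrative rather than recommended.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nakodanei/Blue-Orchid-2x7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" spreads the weights across available GPUs/CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Fill in the LimaRP-style template from above (placeholder text).
prompt = (
    "### Instruction:\n"
    "You are the narrator of an interactive story.\n\n"
    "### Input:\n"
    "User: Describe the abandoned lighthouse we just reached.\n\n"
    "### Response:\n"
    "Character:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```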
|
merge-crew/munin-neuralbeagle-7b-density-low
|
merge-crew
| 2024-02-02T23:12:58Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"base_model:mlabonne/NeuralBeagle14-7B",
"base_model:finetune:mlabonne/NeuralBeagle14-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T23:08:58Z |
---
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
base_model:
- mlabonne/NeuralBeagle14-7B
---
# munin-neuralbeagle-7b-density-low
munin-neuralbeagle-7b-density-low is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B)
## 🧩 Configuration
```yaml
models:
- model: danish-foundation-models/munin-7b-alpha
# No parameters necessary for base model
- model: mlabonne/NeuralBeagle14-7B
parameters:
density: 0.3
weight: 0.6
merge_method: dare_ties
base_model: danish-foundation-models/munin-7b-alpha
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "merge-crew/munin-neuralbeagle-7b-density-low"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
soonyau/visconet
|
soonyau
| 2024-02-02T23:07:05Z | 0 | 2 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2024-01-03T13:45:01Z |
---
license: cc-by-nc-sa-4.0
---
|
ubaskota/my_eli5_mlm_model
|
ubaskota
| 2024-02-02T23:03:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-02-02T21:13:35Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: my_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4048
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4542 | 1.0 | 7300 | 0.4418 |
| 0.4327 | 2.0 | 14600 | 0.4121 |
| 0.4108 | 3.0 | 21900 | 0.4033 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
CLMBR/rel-cl-lstm-4
|
CLMBR
| 2024-02-02T23:01:41Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-26T11:03:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: rel-cl2-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rel-cl2-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9791
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8127 | 0.03 | 76320 | 4.7766 |
| 4.522 | 1.03 | 152640 | 4.4899 |
| 4.3812 | 0.03 | 228960 | 4.3533 |
| 4.2903 | 1.03 | 305280 | 4.2686 |
| 4.2294 | 0.03 | 381600 | 4.2113 |
| 4.1806 | 1.03 | 457920 | 4.1694 |
| 4.1393 | 0.03 | 534240 | 4.1382 |
| 4.1073 | 1.03 | 610560 | 4.1141 |
| 4.0771 | 0.03 | 686880 | 4.0934 |
| 4.0523 | 1.03 | 763200 | 4.0774 |
| 4.0323 | 0.03 | 839520 | 4.0638 |
| 4.0151 | 1.03 | 915840 | 4.0536 |
| 3.9965 | 0.03 | 992160 | 4.0444 |
| 3.9819 | 1.03 | 1068480 | 4.0361 |
| 3.9725 | 0.03 | 1144800 | 4.0289 |
| 3.9584 | 1.03 | 1221120 | 4.0227 |
| 3.9459 | 0.03 | 1297440 | 4.0175 |
| 3.9353 | 1.03 | 1373760 | 4.0132 |
| 3.931 | 0.03 | 1450080 | 4.0095 |
| 3.9245 | 1.03 | 1526400 | 4.0062 |
| 3.9211 | 0.03 | 1602720 | 4.0030 |
| 3.9169 | 1.03 | 1679040 | 4.0008 |
| 3.9098 | 0.03 | 1755360 | 3.9978 |
| 3.9016 | 1.03 | 1831680 | 3.9956 |
| 3.8961 | 0.03 | 1908000 | 3.9935 |
| 3.8877 | 1.03 | 1984320 | 3.9921 |
| 3.8819 | 0.03 | 2060640 | 3.9904 |
| 3.8791 | 0.03 | 2136960 | 3.9887 |
| 3.8725 | 1.03 | 2213280 | 3.9875 |
| 3.8674 | 0.03 | 2289600 | 3.9857 |
| 3.8651 | 0.03 | 2365920 | 3.9848 |
| 3.8611 | 1.03 | 2442240 | 3.9841 |
| 3.8565 | 0.03 | 2518560 | 3.9834 |
| 3.8527 | 1.03 | 2594880 | 3.9825 |
| 3.8532 | 0.03 | 2671200 | 3.9817 |
| 3.8515 | 1.03 | 2747520 | 3.9812 |
| 3.8542 | 0.03 | 2823840 | 3.9804 |
| 3.8536 | 1.03 | 2900160 | 3.9801 |
| 3.8506 | 0.03 | 2976480 | 3.9795 |
| 3.8498 | 1.02 | 3052726 | 3.9791 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mikr/w2v-bert-2.0-czech-colab-cv16
|
mikr
| 2024-02-02T22:31:08Z | 22 | 2 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_16_0",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-02-02T17:11:32Z |
---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_16_0
metrics:
- wer
model-index:
- name: w2v-bert-2.0-czech-colab-cv16
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_16_0
type: common_voice_16_0
config: cs
split: test
args: cs
metrics:
- name: Wer
type: wer
value: 0.05733702722973076
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-czech-colab-cv16
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1023
- Wer: 0.0573
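For quick experimentation, a minimal sketch using the `transformers` ASR pipeline is shown below; the audio file path is a placeholder, and a recent transformers version with Wav2Vec2-BERT support is assumed.
```python
from transformers import pipeline

# Load this checkpoint into the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="mikr/w2v-bert-2.0-czech-colab-cv16",
)

# Placeholder path to a 16 kHz Czech audio file.
print(asr("sample_czech_audio.wav")["text"])
```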
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.5297 | 0.66 | 300 | 0.1448 | 0.1299 |
| 0.0886 | 1.32 | 600 | 0.1353 | 0.1051 |
| 0.0717 | 1.98 | 900 | 0.1157 | 0.0861 |
| 0.0463 | 2.64 | 1200 | 0.0994 | 0.0759 |
| 0.0404 | 3.3 | 1500 | 0.1054 | 0.0724 |
| 0.0314 | 3.96 | 1800 | 0.0915 | 0.0694 |
| 0.0227 | 4.63 | 2100 | 0.0926 | 0.0664 |
| 0.0205 | 5.29 | 2400 | 0.0992 | 0.0652 |
| 0.0161 | 5.95 | 2700 | 0.0932 | 0.0654 |
| 0.0124 | 6.61 | 3000 | 0.0902 | 0.0629 |
| 0.0097 | 7.27 | 3300 | 0.0970 | 0.0612 |
| 0.0081 | 7.93 | 3600 | 0.0946 | 0.0602 |
| 0.0054 | 8.59 | 3900 | 0.0962 | 0.0588 |
| 0.0048 | 9.25 | 4200 | 0.1029 | 0.0579 |
| 0.0034 | 9.91 | 4500 | 0.1023 | 0.0573 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1
|
nakodanei/Blue-Orchid-2x7b_GGUF
|
nakodanei
| 2024-02-02T22:30:15Z | 3,254 | 17 | null |
[
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T15:24:36Z |
---
license: apache-2.0
---
GGUF version of: https://huggingface.co/nakodanei/Blue-Orchid-2x7b
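A minimal `llama-cpp-python` sketch for running one of the GGUF files is shown below; the quantization filename is an assumption, so check the repository's file list and adjust `model_path` accordingly.
```python
from llama_cpp import Llama

llm = Llama(
    model_path="Blue-Orchid-2x7b.Q4_K_M.gguf",  # hypothetical filename; pick the quantization you downloaded
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

prompt = (
    "### Instruction:\nYou are the narrator of an interactive story.\n\n"
    "### Input:\nUser: Hello!\n\n"
    "### Response:\nCharacter:"
)
out = llm(prompt, max_tokens=200, temperature=0.8)
print(out["choices"][0]["text"])
```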
|
Katelie/PixelcopterEnv
|
Katelie
| 2024-02-02T22:27:01Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T18:39:05Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PixelcopterEnv
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 39.40 +/- 30.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
xugefu/bloom-7b1-lora-tagger
|
xugefu
| 2024-02-02T22:25:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-02-02T22:25:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rijgersberg/RobBERT-2023-offensiveness
|
Rijgersberg
| 2024-02-02T22:15:29Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"nl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T21:42:14Z |
---
language:
- nl
widget:
- text: "Vul hier een zin in om te classificeren"
example_title: "Voorbeelden"
- text: "Vroeger werd het woord \"neger\" te pas en te onpas gebruikt"
example_title: "Voorbeeld 1"
- text: "Hij speelde het spel \"geen jager, geen neger\""
example_title: "Voorbeeld 2"
- text: "Ik ga je knuffelen"
example_title: "Voorbeeld 3"
- text: "Ik ga je vermoorden, lul"
example_title: "Voorbeeld 4"
---
|
sbulut/finetuned-kde4-en-to-tr
|
sbulut
| 2024-02-02T21:57:41Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2024-02-02T19:53:18Z |
---
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-tr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-tr
split: train
args: en-tr
metrics:
- name: Bleu
type: bleu
value: 29.832961482999476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-tr-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0990
- Bleu: 29.8330
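For quick experimentation, a minimal sketch using the `transformers` translation pipeline is shown below; it assumes the English-to-Turkish direction implied by the model name, and the example sentence is illustrative.
```python
from transformers import pipeline

translator = pipeline("translation", model="sbulut/finetuned-kde4-en-to-tr")

# Placeholder sentence in the style of KDE interface strings.
print(translator("Open the file manager and create a new folder.")[0]["translation_text"])
```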
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Taller3g1/poyecto_grupal
|
Taller3g1
| 2024-02-02T21:52:42Z | 3 | 0 |
keras
|
[
"keras",
"tf-keras",
"clip",
"region:us"
] | null | 2024-01-29T05:36:12Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
cvlab/pix2gestalt-weights
|
cvlab
| 2024-02-02T21:47:42Z | 0 | 5 | null |
[
"arxiv:2401.14398",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-26T16:24:59Z |
---
license: cc-by-nc-4.0
---
# pix2gestalt Model Weights
[Code](https://github.com/cvlab-columbia/pix2gestalt), [Website](https://gestalt.cs.columbia.edu/), [arXiv](https://arxiv.org/abs/2401.14398)
[pix2gestalt: Amodal Segmentation by Synthesizing Wholes](https://gestalt.cs.columbia.edu/)
[Ege Ozguroglu](https://egeozguroglu.github.io/)<sup>1</sup>, [Ruoshi Liu](https://ruoshiliu.github.io/)<sup>1</sup>, [Dídac Surís](https://www.didacsuris.com/)<sup>1</sup>, [Dian Chen](https://scholar.google.com/citations?user=zdAyna8AAAAJ&hl=en)<sup>2</sup>, [Achal Dave](https://www.achaldave.com/)<sup>2</sup>, [Pavel Tokmakov](https://pvtokmakov.github.io/home/)<sup>2</sup>, [Carl Vondrick](https://www.cs.columbia.edu/~vondrick/)<sup>1</sup> <br>
<sup>1</sup>Columbia University, <sup>2</sup>Toyota Research Institute
<div align="left">
<a href="https://gestalt.cs.columbia.edu/"><img height="80%" alt="pix2gestalt" src="https://gestalt.cs.columbia.edu/static/images/teaser/%20pix2gestalt_teaser.jpg"></a>
</div>
<b>pix2gestalt</b> synthesizes whole objects from only partially visible ones, enabling amodal segmentation, recognition, and 3D reconstruction of occluded objects.
## Citation
```
@misc{ozguroglu2024pix2gestalt,
title={pix2gestalt: Amodal Segmentation by Synthesizing Wholes},
author={Ege Ozguroglu and Ruoshi Liu and Dídac Surís and Dian Chen and Achal Dave and Pavel Tokmakov and Carl Vondrick},
year={2024},
eprint={2401.14398},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
## Acknowledgement
This research is based on work partially supported by the Toyota Research Institute, the DARPA MCS program under Federal Agreement No. N660011924032, the NSF NRI Award \#1925157, and the NSF AI Institute for Artificial and Natural Intelligence Award \#2229929. DS is supported by the Microsoft PhD Fellowship.
|
jbuch808/a2c-PandaReachDense-v3
|
jbuch808
| 2024-02-02T21:46:41Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T21:41:59Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.23 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
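Until the author fills in the section above, here is a minimal sketch of loading the checkpoint with `huggingface_sb3` and rolling it out; the checkpoint filename is an assumption (check the repository's file list), and `panda_gym` is assumed as a dependency to register the environment.
```python
import gymnasium as gym
import panda_gym  # assumed dependency; registers PandaReachDense-v3
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# The filename is a guess; use the actual .zip listed in this repository.
checkpoint = load_from_hub(repo_id="jbuch808/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```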
|
dsteiner93/Reinforce-cartpole1
|
dsteiner93
| 2024-02-02T21:45:20Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T21:45:10Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-cartpole1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
jlbaker361/dcgan-lazy-wikiart1000-resized
|
jlbaker361
| 2024-02-02T21:37:57Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T14:17:05Z |
---
{}
---
Creative Adversarial Network
epochs: 2
dataset jlbaker361/wikiart-balanced1000
n classes 27
batch_size 32
images were resized to 768 and then center cropped to 512
used clip=False
discriminator parameters:
init_dim: 32
final_dim 512
generator parameters:
input noise_dim: 100
|
CLMBR/full-transformer-4
|
CLMBR
| 2024-02-02T21:36:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-26T10:07:15Z |
---
tags:
- generated_from_trainer
model-index:
- name: full2-transformer-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-transformer-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2223 | 0.03 | 76320 | 4.1935 |
| 4.0184 | 1.03 | 152640 | 4.0257 |
| 3.9091 | 0.03 | 228960 | 3.9515 |
| 3.845 | 1.03 | 305280 | 3.9101 |
| 3.7943 | 0.03 | 381600 | 3.8851 |
| 3.7537 | 0.03 | 457920 | 3.8688 |
| 3.7243 | 1.03 | 534240 | 3.8585 |
| 3.6946 | 0.03 | 610560 | 3.8522 |
| 3.6634 | 1.03 | 686880 | 3.8472 |
| 3.6406 | 0.03 | 763200 | 3.8446 |
| 3.6184 | 1.03 | 839520 | 3.8431 |
| 3.5959 | 0.03 | 915840 | 3.8432 |
| 3.5817 | 1.03 | 992160 | 3.8423 |
| 3.5621 | 0.03 | 1068480 | 3.8429 |
| 3.5438 | 1.03 | 1144800 | 3.8439 |
| 3.5273 | 0.03 | 1221120 | 3.8440 |
| 3.5096 | 1.03 | 1297440 | 3.8458 |
| 3.4966 | 0.03 | 1373760 | 3.8464 |
| 3.4822 | 1.03 | 1450080 | 3.8478 |
| 3.4746 | 0.03 | 1526400 | 3.8491 |
| 3.4649 | 1.03 | 1602720 | 3.8508 |
| 3.4573 | 0.03 | 1679040 | 3.8530 |
| 3.4517 | 1.03 | 1755360 | 3.8537 |
| 3.4416 | 0.03 | 1831680 | 3.8544 |
| 3.4297 | 1.03 | 1908000 | 3.8557 |
| 3.4193 | 0.03 | 1984320 | 3.8570 |
| 3.4087 | 1.03 | 2060640 | 3.8579 |
| 3.3961 | 0.03 | 2136960 | 3.8595 |
| 3.3885 | 1.03 | 2213280 | 3.8609 |
| 3.3768 | 0.03 | 2289600 | 3.8616 |
| 3.3645 | 1.03 | 2365920 | 3.8617 |
| 3.3515 | 0.03 | 2442240 | 3.8626 |
| 3.337 | 0.03 | 2518560 | 3.8631 |
| 3.3292 | 0.03 | 2594880 | 3.8627 |
| 3.3153 | 1.03 | 2671200 | 3.8646 |
| 3.3131 | 0.03 | 2747520 | 3.8646 |
| 3.3088 | 0.03 | 2823840 | 3.8638 |
| 3.3024 | 1.03 | 2900160 | 3.8636 |
| 3.3024 | 0.03 | 2976480 | 3.8629 |
| 3.2966 | 0.02 | 3052726 | 3.8620 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
KreigerNadir/LavLora
|
KreigerNadir
| 2024-02-02T21:35:58Z | 1 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-02-02T21:29:23Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
((blood, dismemberment, disgust)), girl, (the pentagram), curved demonic
horns, gothic dress, (red tone, fire in the background), slate atmosphere,
cinematic, dimmed colors, dark shot, muted colors, film grainy, lut, spooky
<lora:Lav_Lune-Harriet_Cains-000001:1>
<lora:Lav_Lune-Harriet_Cains-000002:1>
parameters:
negative_prompt: >-
(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong
anatomy, extra limb, missing limb, floating limbs, (mutated hands and
fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting,
blurry, amputation, lots of navels, lots of ears
output:
url: images/00008-4004957634.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# First Test Lora
<Gallery />
## Model description

## Download model
Weights for this model are available in Safetensors format.
[Download](/KreigerNadir/LavLora/tree/main) them in the Files & versions tab.
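For reference, a minimal `diffusers` sketch of applying this LoRA on top of the SDXL base model is shown below; the `weight_name` and prompt are placeholders, so use the actual `.safetensors` filename from the Files & versions tab.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Hypothetical weight_name; replace it with the file listed in this repository.
pipe.load_lora_weights("KreigerNadir/LavLora", weight_name="Lav_Lune-Harriet_Cains-000002.safetensors")

image = pipe(
    "girl with curved demonic horns, gothic dress, red tone, cinematic, dark shot",
    num_inference_steps=30,
).images[0]
image.save("lavlora_sample.png")
```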
|
pleasefill/mesolo
|
pleasefill
| 2024-02-02T21:34:40Z | 0 | 0 |
mlx
|
[
"mlx",
"music",
"robotics",
"an",
"dataset:HuggingFaceM4/WebSight",
"license:bigscience-bloom-rail-1.0",
"region:us"
] |
robotics
| 2024-02-02T21:32:11Z |
---
license: bigscience-bloom-rail-1.0
datasets:
- HuggingFaceM4/WebSight
language:
- an
metrics:
- character
library_name: mlx
pipeline_tag: robotics
tags:
- music
---
|
janhq/stealth-rag-v1.1-GGUF
|
janhq
| 2024-02-02T20:31:28Z | 0 | 0 | null |
[
"gguf",
"alignment-handbook",
"generated_from_trainer",
"trl",
"sft",
"dataset:jan-hq/bagel_sft_binarized",
"dataset:jan-hq/dolphin_binarized",
"dataset:jan-hq/openhermes_binarized",
"base_model:jan-hq/stealth-rag-v1.1",
"base_model:quantized:jan-hq/stealth-rag-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-02T20:17:12Z |
---
license: apache-2.0
base_model: jan-hq/stealth-rag-v1.1
tags:
- alignment-handbook
- generated_from_trainer
- trl
- sft
- generated_from_trainer
datasets:
- jan-hq/bagel_sft_binarized
- jan-hq/dolphin_binarized
- jan-hq/openhermes_binarized
model-index:
- name: LlamaCorn-sft-adapter
results: []
model_creator: jan-hq
model_name: stealth-rag-v1.1
quantized_by: JanHQ
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->
# Model Description
This is a GGUF version of [jan-hq/stealth-rag-v1.1](https://huggingface.co/jan-hq/stealth-rag-v1.1)
- Model creator: [jan-hq](https://huggingface.co/jan-hq)
- Original model: [stealth-rag-v1.1](https://huggingface.co/jan-hq/stealth-rag-v1.1)
- Model description: [Readme](https://huggingface.co/jan-hq/stealth-rag-v1.1/blob/main/README.md)
# About Jan
Jan believes in the need for an open-source AI ecosystem and is building the infra and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.
Jan's long-term vision is to build a cognitive framework for future robots, who are practical, useful assistants for humans and businesses in everyday life.
# Jan Model Converter
This is the repository for the [open-source converter](https://github.com/janhq/model-converter). We would be grateful if the community could contribute to and strengthen this repository. We aim to expand it so that it can convert models into various formats.
|
cashewEnthusiast/Taxi-v3-attempt1
|
cashewEnthusiast
| 2024-02-02T20:23:00Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-02-02T20:22:58Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-attempt1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.42 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# `load_from_hub` is the helper defined in the Deep RL Course notebook; it downloads
# and unpickles the saved Q-table dictionary from the Hub.
model = load_from_hub(repo_id="cashewEnthusiast/Taxi-v3-attempt1", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
LunaticTanuki/oop-de-qg-flan-t5-base-v6
|
LunaticTanuki
| 2024-02-02T20:20:58Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T20:20:10Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: oop-de-qg-flan-t5-base-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oop-de-qg-flan-t5-base-v6
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7965
- Rouge1: 65.2469
- Rouge2: 52.5016
- Rougel: 63.4057
- Rougelsum: 63.531
- Gen Len: 15.1903
- Bleu: 0.4231
- Precisions: [0.7077429983525535, 0.5410502958579881, 0.4610198061525495, 0.3966699314397649]
- Brevity Penalty: 0.8225
- Length Ratio: 0.8365
- Translation Length: 3035
- Reference Length: 3628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:------:|:----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|
| No log | 1.0 | 233 | 0.9298 | 59.4999 | 45.9581 | 57.5532 | 57.7586 | 14.9094 | 0.3448 | [0.6374833555259654, 0.4552936775158997, 0.3672075149444919, 0.3043262058677275] | 0.8124 | 0.8280 | 3004 | 3628 |
| No log | 2.0 | 466 | 0.8616 | 58.9163 | 46.1813 | 57.3535 | 57.5989 | 14.1631 | 0.3488 | [0.6636553161917998, 0.48095798979191207, 0.39620938628158847, 0.3320954907161804] | 0.7706 | 0.7933 | 2878 | 3628 |
| 1.0446 | 3.0 | 699 | 0.8276 | 62.7765 | 50.2006 | 61.0262 | 61.1999 | 14.5952 | 0.3837 | [0.6885413124787487, 0.5160919540229885, 0.42957437472575694, 0.3613963039014374] | 0.7917 | 0.8106 | 2941 | 3628 |
| 1.0446 | 4.0 | 932 | 0.8107 | 63.8174 | 51.0448 | 61.7972 | 62.0925 | 14.9305 | 0.3969 | [0.6949949613705072, 0.5249433106575964, 0.43844492440604754, 0.3719758064516129] | 0.8036 | 0.8206 | 2977 | 3628 |
| 0.7689 | 5.0 | 1165 | 0.7966 | 64.5126 | 51.7002 | 62.5 | 62.6287 | 15.1239 | 0.4088 | [0.6974900924702774, 0.5272525027808677, 0.4437869822485207, 0.3778869778869779] | 0.8202 | 0.8346 | 3028 | 3628 |
| 0.7689 | 6.0 | 1398 | 0.7986 | 64.2531 | 50.9604 | 62.2618 | 62.4906 | 15.3263 | 0.4077 | [0.6919570172582221, 0.5182481751824818, 0.43295973432959733, 0.36766121270452357] | 0.8341 | 0.8465 | 3071 | 3628 |
| 0.6741 | 7.0 | 1631 | 0.7974 | 64.9736 | 52.3108 | 63.2436 | 63.3625 | 15.2175 | 0.4205 | [0.7034233048057933, 0.5386036202438124, 0.45707070707070707, 0.3926650366748166] | 0.8235 | 0.8374 | 3038 | 3628 |
| 0.6741 | 8.0 | 1864 | 0.7965 | 65.2469 | 52.5016 | 63.4057 | 63.531 | 15.1903 | 0.4231 | [0.7077429983525535, 0.5410502958579881, 0.4610198061525495, 0.3966699314397649] | 0.8225 | 0.8365 | 3035 | 3628 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LunaticTanuki/oop-de-qg-flan-t5-base-v5
|
LunaticTanuki
| 2024-02-02T20:14:29Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T11:16:41Z |
---
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: oop-de-qg-flan-t5-base-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# oop-de-qg-flan-t5-base-v5
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8305
- Rouge1: 60.2858
- Rouge2: 47.0551
- Rougel: 58.5541
- Rougelsum: 58.5986
- Gen Len: 14.6254
- Bleu: 0.3585
- Precisions: [0.6612685560053981, 0.4800607671857197, 0.39139878366637704, 0.3257229832572298]
- Brevity Penalty: 0.7993
- Length Ratio: 0.8170
- Translation Length: 2964
- Reference Length: 3628
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Bleu | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:------:|:-----------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|
| No log | 0.99 | 72 | 0.9838 | 58.281 | 44.4811 | 56.6252 | 56.6047 | 14.6042 | 0.3304 | [0.6428324697754749, 0.4543681747269891, 0.367666815942678, 0.30546792849631965] | 0.7763 | 0.7980 | 2895 | 3628 |
| No log | 1.99 | 145 | 0.9010 | 55.8534 | 42.0605 | 54.3596 | 54.3148 | 14.6586 | 0.3076 | [0.6021433355659745, 0.41167608286252355, 0.3253012048192771, 0.26241846462619167] | 0.8065 | 0.8230 | 2986 | 3628 |
| No log | 3.0 | 218 | 0.8767 | 57.7174 | 44.1283 | 56.4402 | 56.3292 | 14.5136 | 0.3323 | [0.6361781706902414, 0.4509578544061303, 0.36287845546292236, 0.2982546201232033] | 0.7917 | 0.8106 | 2941 | 3628 |
| No log | 4.0 | 291 | 0.8583 | 60.2113 | 47.3135 | 58.8257 | 58.7408 | 14.3233 | 0.3580 | [0.6711758584807492, 0.49490595611285265, 0.4074741107609185, 0.3412698412698413] | 0.7723 | 0.7947 | 2883 | 3628 |
| No log | 4.99 | 363 | 0.8396 | 59.8588 | 46.8718 | 58.3234 | 58.2478 | 14.4894 | 0.3539 | [0.6580469547465124, 0.47929447852760737, 0.39042599912165127, 0.32528263103802674] | 0.7910 | 0.8101 | 2939 | 3628 |
| No log | 5.99 | 436 | 0.8316 | 59.7653 | 46.5459 | 58.066 | 58.1354 | 14.4804 | 0.3548 | [0.6613342409802587, 0.4798619102416571, 0.3914762741652021, 0.3264781491002571] | 0.7907 | 0.8098 | 2938 | 3628 |
| 0.9411 | 7.0 | 509 | 0.8305 | 60.2858 | 47.0551 | 58.5541 | 58.5986 | 14.6254 | 0.3585 | [0.6612685560053981, 0.4800607671857197, 0.39139878366637704, 0.3257229832572298] | 0.7993 | 0.8170 | 2964 | 3628 |
| 0.9411 | 7.92 | 576 | 0.8309 | 60.2226 | 47.1068 | 58.611 | 58.5902 | 14.6526 | 0.3605 | [0.6590450571620713, 0.4801362088535755, 0.39273356401384085, 0.3276123170116103] | 0.8026 | 0.8197 | 2974 | 3628 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
NeuNav/poca-SoccerTwos
|
NeuNav
| 2024-02-02T20:09:44Z | 16 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] |
reinforcement-learning
| 2024-02-02T20:09:28Z |
---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: NeuNav/poca-SoccerTwos
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Patcas/codet5-no-doc-new-v1
|
Patcas
| 2024-02-02T20:07:17Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-base",
"base_model:finetune:Salesforce/codet5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T18:50:36Z |
---
license: apache-2.0
base_model: Salesforce/codet5-base
tags:
- generated_from_trainer
model-index:
- name: codet5-no-doc-new-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-no-doc-new-v1
This model is a fine-tuned version of [Salesforce/codet5-base](https://huggingface.co/Salesforce/codet5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2705
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 230 | 1.7597 |
| No log | 2.0 | 460 | 1.4954 |
| 2.0602 | 3.0 | 690 | 1.3798 |
| 2.0602 | 4.0 | 920 | 1.3298 |
| 1.1099 | 5.0 | 1150 | 1.3249 |
| 1.1099 | 6.0 | 1380 | 1.2761 |
| 0.8099 | 7.0 | 1610 | 1.2832 |
| 0.8099 | 8.0 | 1840 | 1.2702 |
| 0.6516 | 9.0 | 2070 | 1.2734 |
| 0.6516 | 10.0 | 2300 | 1.2705 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
jak414/liedetect_fold2
|
jak414
| 2024-02-02T20:03:45Z | 1 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-11T22:38:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
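For reference, a minimal sketch of recreating the quantization config above with `transformers` and loading this adapter with PEFT is shown below; the base model id is not stated in this card, so the one used here is a placeholder.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Mirror the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

# Placeholder base model; this card does not say which model the adapter was trained on.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "jak414/liedetect_fold2")
```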
### Framework versions
- PEFT 0.4.0
|
ameerazam08/DiffSynth-Studio
|
ameerazam08
| 2024-02-02T20:00:54Z | 0 | 8 | null |
[
"arxiv:2401.16224",
"region:us"
] | null | 2024-02-02T19:55:39Z |
# DiffSynth Studio
## Introduction
DiffSynth is a new diffusion engine. We have restructured architectures including the text encoder, UNet, and VAE, maintaining compatibility with models from the open-source community while enhancing computational performance. This version is currently in its initial stage, supporting SD and SDXL architectures. In the future, we plan to develop more interesting features based on this new codebase.
## Installation
Create Python environment:
```
conda env create -f environment.yml
```
We find that sometimes `conda` cannot install `cupy` correctly; if so, please install it manually. See [this document](https://docs.cupy.dev/en/stable/install.html) for more details.
Enter the Python environment:
```
conda activate DiffSynthStudio
```
## Usage (in WebUI)
```
python -m streamlit run Diffsynth_Studio.py
```
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/93085557-73f3-4eee-a205-9829591ef954
## Usage (in Python code)
### Example 1: Stable Diffusion
We can generate images with very high resolution. Please see `examples/sd_text_to_image.py` for more details.
*(Example images at 512×512, 1024×1024, 2048×2048, and 4096×4096 resolutions omitted.)*
### Example 2: Stable Diffusion XL
Generate images with Stable Diffusion XL. Please see `examples/sdxl_text_to_image.py` for more details.
*(Example images at 1024×1024 and 2048×2048 resolutions omitted.)*
### Example 3: Stable Diffusion XL Turbo
Generate images with Stable Diffusion XL Turbo. You can see `examples/sdxl_turbo.py` for more details, but we highly recommend you to use it in the WebUI.
|"black car"|"red car"|
|-|-|
|||
### Example 4: Toon Shading (Diffutoon)
This example is implemented based on [Diffutoon](https://arxiv.org/abs/2401.16224). This approach is well suited to rendering high-resolution videos with rapid motion. You can easily modify the parameters in the config dict. See `examples/diffutoon_toon_shading.py`.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/b54c05c5-d747-4709-be5e-b39af82404dd
### Example 5: Toon Shading with Editing Signals (Diffutoon)
Coming soon.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/20528af5-5100-474a-8cdc-440b9efdd86c
### Example 6: Toon Shading (in native Python code)
This example is provided for developers. If you don't want to use the config to manage parameters, you can see `examples/sd_toon_shading.py` to learn how to use it in native Python code.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/607c199b-6140-410b-a111-3e4ffb01142c
### Example 7: Text to Video
Given a prompt, DiffSynth Studio can generate a video using a Stable Diffusion model and an AnimateDiff model. We can break the limitation on the number of frames! See `examples/sd_text_to_video.py`.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/8f556355-4079-4445-9b48-e9da77699437
### Example 8: Video Stylization
We provide an example for video stylization. In this pipeline, the rendered video is completely different from the original video, thus we need a powerful deflickering algorithm. We use FastBlend to implement the deflickering module. Please see `examples/sd_video_rerender.py` for more details.
https://github.com/Artiprocher/DiffSynth-Studio/assets/35051019/59fb2f7b-8de0-4481-b79f-0c3a7361a1ea
### Example 9: Prompt Processing
If you are not a native English speaker, we provide a translation service. Our prompter can translate prompts from other languages into English and refine them using "BeautifulPrompt" models. Please see `examples/sd_prompt_refining.py` for more details.
Prompt: "一个漂亮的女孩". The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English.
*(Example images for seed=0 through seed=3 omitted.)*
Prompt: "一个漂亮的女孩". The [translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) will translate it to English. Then the [refining model](https://huggingface.co/alibaba-pai/pai-bloom-1b1-text2prompt-sd) will refine the translated prompt for better visual quality.
*(Example images for seed=0 through seed=3 omitted.)*
|
jlbaker361/dcgan-lazy-wikiart500-clip-resized-0
|
jlbaker361
| 2024-02-02T19:59:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-02-01T20:49:23Z |
---
{}
---
Creative Adversarial Network
epochs: 2
dataset jlbaker361/wikiart-balanced500
n classes 27
batch_size 4
images were resized to 768 and then center cropped to 512
used clip=True
conditional =False
discriminator parameters:
init_dim: 32
final_dim 512
generator parameters:
input noise_dim: 100
|
joshberg65/mistral_7b_Rassle
|
joshberg65
| 2024-02-02T19:48:21Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-31T21:29:09Z |
---
license: apache-2.0
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
More information coming soon! I've trained this model on pro wrestling results and information, on top of the base Mistral 7B model and the Guanaco dataset. The final version of this model will be a pro wrestling and sports entertainment guru!
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-all-explanations-5-epochs-full-dataset-lora-full
|
mtc
| 2024-02-02T19:39:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-02-02T19:39:25Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
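No official snippet is provided, but since the card metadata lists `peft` as the library and `mistralai/Mistral-7B-v0.1` as the base model, a minimal loading sketch might look like the following (the prompt text and generation settings are illustrative assumptions, not documented usage):
```python
# Sketch: load the frozen base model, then attach this LoRA adapter with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "mtc/mistralai-Mistral-7B-v0.1-7b-xsum-with-all-explanations-5-epochs-full-dataset-lora-full"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)  # adapter weights on top of the base model

# Illustrative prompt only; the expected input format is not documented on this card.
inputs = tokenizer("Summarize the following article:\n...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```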
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
RohanKumarMishra/fine_tuneing_2
|
RohanKumarMishra
| 2024-02-02T19:39:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-02-02T19:00:13Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
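No usage snippet is given; since the metadata lists `peft` with base model `meta-llama/Llama-2-7b-hf`, one hedged option is to let PEFT resolve the base model automatically (note that the Llama-2 base weights are gated on the Hub, and the prompt below is illustrative):
```python
# Sketch: AutoPeftModelForCausalLM loads the gated base model and this adapter in one call.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "RohanKumarMishra/fine_tuneing_2"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Illustrative prompt; the adapter's intended task is not documented.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```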
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2
|
nrivkin/sd-class-butterflies-32
|
nrivkin
| 2024-02-02T19:33:45Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-02-02T19:33:38Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('nrivkin/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
esahit/t5-medical-text-simplification
|
esahit
| 2024-02-02T19:32:04Z | 22 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:mrm8488/t5-small-finetuned-text-simplification",
"base_model:finetune:mrm8488/t5-small-finetuned-text-simplification",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-02T18:15:55Z |
---
license: apache-2.0
base_model: mrm8488/t5-small-finetuned-text-simplification
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-medical-text-simplification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-medical-text-simplification
This model is a fine-tuned version of [mrm8488/t5-small-finetuned-text-simplification](https://huggingface.co/mrm8488/t5-small-finetuned-text-simplification) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4158
- Bleu: 0.2491 (precisions: 0.6301 / 0.4617 / 0.3783 / 0.3191; brevity penalty: 0.5755; length ratio: 0.6441; translation length: 44011; reference length: 68328)
- Sari: 21.7729
- Fkgl: 10.2474
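Although no usage section is included, the checkpoint is a T5 text2text model, so a minimal sketch for trying it could look like this (the input sentence and generation length are illustrative assumptions, not documented usage):
```python
# Sketch: run the simplification model through the text2text-generation pipeline.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="esahit/t5-medical-text-simplification")
text = "The patient exhibited signs of myocardial infarction upon admission."
print(simplifier(text, max_new_tokens=64)[0]["generated_text"])
```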
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Sari | Fkgl |
|:-------------:|:-----:|:----:|:---------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:----------------------------:|:-------:|
| 1.5524 | 1.0 | 1578 | 1.4317 | {'bleu': 0.24854970426705067, 'precisions': [0.626776178839714, 0.45794346978557504, 0.37443247809101465, 0.3154227136604469], 'brevity_penalty': 0.5792493345645447, 'length_ratio': 0.646821215314366, 'translation_length': 44196, 'reference_length': 68328} | {'sari': 21.542679628603977} | 10.2949 |
| 1.5282 | 2.0 | 3156 | 1.4249 | {'bleu': 0.24886563197246125, 'precisions': [0.6285792076961474, 0.4604086221222934, 0.3770192256766061, 0.3176616771658094], 'brevity_penalty': 0.5767757332645675, 'length_ratio': 0.6450357101042032, 'translation_length': 44074, 'reference_length': 68328} | {'sari': 21.665573517166536} | 10.2937 |
| 1.4997 | 3.0 | 4734 | 1.4176 | {'bleu': 0.24852094682922746, 'precisions': [0.629403208945048, 0.4605591734808794, 0.377421066595914, 0.3182660566398332], 'brevity_penalty': 0.5753144561890373, 'length_ratio': 0.6439819693244351, 'translation_length': 44002, 'reference_length': 68328} | {'sari': 21.700716936778782} | 10.2544 |
| 1.5028 | 4.0 | 6312 | 1.4176 | {'bleu': 0.24876653336273433, 'precisions': [0.6299538437052363, 0.4615309246785058, 0.37816241471767237, 0.3188943296728769], 'brevity_penalty': 0.5748880487421792, 'length_ratio': 0.6436746282636694, 'translation_length': 43981, 'reference_length': 68328} | {'sari': 21.750120178010484} | 10.2531 |
| 1.4976 | 5.0 | 7890 | 1.4158 | {'bleu': 0.24913061085239344, 'precisions': [0.6300697552884507, 0.46170603353322726, 0.3783389479827051, 0.3190805662507599], 'brevity_penalty': 0.5754971743889961, 'length_ratio': 0.6441136869219061, 'translation_length': 44011, 'reference_length': 68328} | {'sari': 21.772869578730884} | 10.2474 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
SudiptoPramanik/Mistral_Gen_Gen_ExtractiveSummary
|
SudiptoPramanik
| 2024-02-02T19:28:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T19:10:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
meetplace1/bertsmallclassifier100
|
meetplace1
| 2024-02-02T19:28:19Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-02T18:59:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
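The card metadata tags this repository as a BERT text-classification model, so a minimal, hedged sketch (the label set and expected input domain are not documented) might be:
```python
# Sketch: classify a sentence with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="meetplace1/bertsmallclassifier100")
print(classifier("Replace this with the text you want to classify."))
```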
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kwanok/Llama-2-daangn-7b
|
kwanok
| 2024-02-02T19:23:14Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-02T16:20:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
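Based only on the `text-generation` tag and the Llama architecture listed in the metadata, a hedged starting point could be the following (the Korean prompt and decoding settings are illustrative assumptions):
```python
# Sketch: load the model for causal text generation.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "kwanok/Llama-2-daangn-7b"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("안녕하세요! 오늘 날씨가", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```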
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kakashiCopyNinja/ft_Llama-2
|
kakashiCopyNinja
| 2024-02-02T19:07:09Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-16T16:33:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
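Given only the `text-generation` tag and Llama architecture in the metadata, one hedged option is the high-level text-generation pipeline (prompt and settings below are illustrative, not documented usage):
```python
# Sketch: generate text with the pipeline API.
from transformers import pipeline

generator = pipeline("text-generation", model="kakashiCopyNinja/ft_Llama-2", device_map="auto")
print(generator("Once upon a time", max_new_tokens=64)[0]["generated_text"])
```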
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LoneStriker/Synatra-Mixtral-8x7B-GGUF
|
LoneStriker
| 2024-02-02T18:57:17Z | 1 | 1 | null |
[
"gguf",
"moe",
"ko",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-02-02T16:57:59Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- moe
---
# **Synatra-Mixtral-8x7B**
<img src="./Synatra-Mixtral.png" alt="Synatra-Mixtral-8x7B" width="512"/>
**Synatra-Mixtral-8x7B** is a fine-tuned version of the Mixtral-8x7B-Instruct-v0.1 model using **Korean** datasets.
The model offers notably strong comprehension and inference capabilities and is licensed under Apache-2.0.
# **Join Our Discord**
[Server Link](https://discord.gg/MrBt3PXdXc)
# **License**
**OPEN**, Apache-2.0.
# **Model Details**
**Base Model**
[mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
**Trained On**
A100 80GB * 6
**Instruction format**
It follows the **Alpaca** format.
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{input}
### Response:
{output}
```
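If you want to build this prompt manually rather than through the tokenizer's chat template (used in the implementation code below), a small illustrative sketch is:
```python
# Sketch: fill the Alpaca-style template shown above.
instruction = "아인슈타인의 상대성이론에 대해서 자세히 설명해줘."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    f"### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
```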
# **Model Benchmark**
TBD
# **Implementation Code**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")
messages = [
{"role": "user", "content": "아인슈타인의 상대성이론에 대해서 자세히 설명해줘."},
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
# **Author's Message**
This model's training was not sponsored by any organization; it was made possible by the support of people around the world.
[Support Me](https://www.buymeacoffee.com/mwell)
Contact Me on Discord - **is.maywell**
Follow me on twitter: https://twitter.com/stablefluffy
|
kviai/KviGPT-7b-Chat
|
kviai
| 2024-02-02T18:49:49Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"LLM",
"Chat",
"KVIGPT",
"Llama",
"Lora",
"KVIAI",
"en",
"ru",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-13T16:39:18Z |
---
license: cc-by-sa-4.0
language:
- en
- ru
pipeline_tag: text-generation
tags:
- LLM
- Chat
- KVIGPT
- Llama
- Lora
- KVIAI
library_name: transformers
---
# KviGPT 7b
KviGPT is a powerful text-generation LLM.
## Usage
You can use KviGPT with the transformers library, as shown below:
```Python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("kviai/KviGPT-7b-Chat")
model = AutoModelForCausalLM.from_pretrained("kviai/KviGPT-7b-Chat")

# Wrap them in a text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Hi, what do you know about TON coin?"
output = generator(prompt, max_new_tokens=128)
print(output[0]["generated_text"])
```
## Model Details
You can train it using Amazon SageMaker or AutoTrain.
## Credits
- **Developed by:** KviAI
- **Funded by:** Katsyka Vasiliy
- **Model type:** Text Generation
- **Language(s) (NLP):** English, Russian
- **License:** Creative Commons Attribution Share Alike 4.0
## Demo
- **Demo:** [https://hf.co/spaces/kviai/kvigpt](https://hf.co/spaces/kviai/kvigpt)
|
guirnd/ppo-SnowballTarget
|
guirnd
| 2024-02-02T18:49:25Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-02-02T18:49:17Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: guirnd/ppo-SnowballTarget
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|