modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-08-04 06:29:44
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 550
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-08-04 06:26:08
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
aisuko/ft-google-gemma-2b-it-qlora
|
aisuko
| 2024-03-07T03:25:53Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-2b-it",
"base_model:adapter:google/gemma-2b-it",
"license:other",
"region:us"
] | null | 2024-03-06T01:03:43Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b-it
model-index:
- name: ft-google-gemma-2b-it-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-google-gemma-2b-it-qlora
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5909
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1198 | 3.0 | 3 | 2.6224 |
| 0.0479 | 6.0 | 6 | 2.4699 |
| 0.0108 | 9.0 | 9 | 2.5909 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/eacc_last1
|
OwOOwO
| 2024-03-07T03:22:11Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T03:19:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/BERT-legal-de-cased_German_legal_SQuAD_1000
|
farid1088
| 2024-03-07T03:11:31Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T13:44:57Z |
---
tags:
- generated_from_trainer
model-index:
- name: BERT-legal-de-cased_German_legal_SQuAD_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-legal-de-cased_German_legal_SQuAD_1000
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3902
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.1717 |
| No log | 2.0 | 4 | 6.1711 |
| No log | 3.0 | 6 | 6.1753 |
| No log | 4.0 | 8 | 6.0783 |
| No log | 5.0 | 10 | 5.7088 |
| No log | 6.0 | 12 | 5.4121 |
| No log | 7.0 | 14 | 5.0754 |
| No log | 8.0 | 16 | 4.8317 |
| No log | 9.0 | 18 | 4.5938 |
| No log | 10.0 | 20 | 4.3498 |
| No log | 11.0 | 22 | 4.1427 |
| No log | 12.0 | 24 | 3.9210 |
| No log | 13.0 | 26 | 3.6815 |
| No log | 14.0 | 28 | 3.4737 |
| No log | 15.0 | 30 | 3.2730 |
| No log | 16.0 | 32 | 3.1755 |
| No log | 17.0 | 34 | 3.0722 |
| No log | 18.0 | 36 | 2.9440 |
| No log | 19.0 | 38 | 2.7475 |
| No log | 20.0 | 40 | 2.5234 |
| No log | 21.0 | 42 | 2.4431 |
| No log | 22.0 | 44 | 2.2528 |
| No log | 23.0 | 46 | 2.2330 |
| No log | 24.0 | 48 | 1.9518 |
| No log | 25.0 | 50 | 1.8298 |
| No log | 26.0 | 52 | 1.7587 |
| No log | 27.0 | 54 | 1.6591 |
| No log | 28.0 | 56 | 1.7479 |
| No log | 29.0 | 58 | 1.4854 |
| No log | 30.0 | 60 | 1.5093 |
| No log | 31.0 | 62 | 1.4208 |
| No log | 32.0 | 64 | 1.2692 |
| No log | 33.0 | 66 | 1.4203 |
| No log | 34.0 | 68 | 1.2894 |
| No log | 35.0 | 70 | 1.2888 |
| No log | 36.0 | 72 | 1.2410 |
| No log | 37.0 | 74 | 1.1695 |
| No log | 38.0 | 76 | 1.2593 |
| No log | 39.0 | 78 | 1.1525 |
| No log | 40.0 | 80 | 1.1403 |
| No log | 41.0 | 82 | 1.0884 |
| No log | 42.0 | 84 | 1.0839 |
| No log | 43.0 | 86 | 1.1500 |
| No log | 44.0 | 88 | 1.1241 |
| No log | 45.0 | 90 | 1.1409 |
| No log | 46.0 | 92 | 1.1392 |
| No log | 47.0 | 94 | 1.1837 |
| No log | 48.0 | 96 | 1.1322 |
| No log | 49.0 | 98 | 1.1780 |
| No log | 50.0 | 100 | 1.1311 |
| No log | 51.0 | 102 | 1.1044 |
| No log | 52.0 | 104 | 1.1809 |
| No log | 53.0 | 106 | 1.1250 |
| No log | 54.0 | 108 | 1.0819 |
| No log | 55.0 | 110 | 1.1265 |
| No log | 56.0 | 112 | 1.1851 |
| No log | 57.0 | 114 | 1.1316 |
| No log | 58.0 | 116 | 1.1193 |
| No log | 59.0 | 118 | 1.1946 |
| No log | 60.0 | 120 | 1.1613 |
| No log | 61.0 | 122 | 1.1686 |
| No log | 62.0 | 124 | 1.1920 |
| No log | 63.0 | 126 | 1.1830 |
| No log | 64.0 | 128 | 1.1377 |
| No log | 65.0 | 130 | 1.1072 |
| No log | 66.0 | 132 | 1.1467 |
| No log | 67.0 | 134 | 1.1622 |
| No log | 68.0 | 136 | 1.2440 |
| No log | 69.0 | 138 | 1.2474 |
| No log | 70.0 | 140 | 1.1925 |
| No log | 71.0 | 142 | 1.1580 |
| No log | 72.0 | 144 | 1.0943 |
| No log | 73.0 | 146 | 1.1697 |
| No log | 74.0 | 148 | 1.2091 |
| No log | 75.0 | 150 | 1.2232 |
| No log | 76.0 | 152 | 1.1534 |
| No log | 77.0 | 154 | 1.0206 |
| No log | 78.0 | 156 | 1.0538 |
| No log | 79.0 | 158 | 1.1297 |
| No log | 80.0 | 160 | 1.2153 |
| No log | 81.0 | 162 | 1.2081 |
| No log | 82.0 | 164 | 1.1423 |
| No log | 83.0 | 166 | 1.0702 |
| No log | 84.0 | 168 | 1.0416 |
| No log | 85.0 | 170 | 1.1162 |
| No log | 86.0 | 172 | 1.1964 |
| No log | 87.0 | 174 | 1.2508 |
| No log | 88.0 | 176 | 1.2248 |
| No log | 89.0 | 178 | 1.1240 |
| No log | 90.0 | 180 | 1.0029 |
| No log | 91.0 | 182 | 0.9359 |
| No log | 92.0 | 184 | 0.9876 |
| No log | 93.0 | 186 | 1.1028 |
| No log | 94.0 | 188 | 1.2150 |
| No log | 95.0 | 190 | 1.2546 |
| No log | 96.0 | 192 | 1.2656 |
| No log | 97.0 | 194 | 1.2426 |
| No log | 98.0 | 196 | 1.1099 |
| No log | 99.0 | 198 | 1.0726 |
| No log | 100.0 | 200 | 1.1013 |
| No log | 101.0 | 202 | 1.1394 |
| No log | 102.0 | 204 | 1.2147 |
| No log | 103.0 | 206 | 1.2634 |
| No log | 104.0 | 208 | 1.2789 |
| No log | 105.0 | 210 | 1.2354 |
| No log | 106.0 | 212 | 1.1620 |
| No log | 107.0 | 214 | 1.1166 |
| No log | 108.0 | 216 | 1.1195 |
| No log | 109.0 | 218 | 1.1365 |
| No log | 110.0 | 220 | 1.1633 |
| No log | 111.0 | 222 | 1.1790 |
| No log | 112.0 | 224 | 1.1807 |
| No log | 113.0 | 226 | 1.1756 |
| No log | 114.0 | 228 | 1.1535 |
| No log | 115.0 | 230 | 1.1405 |
| No log | 116.0 | 232 | 1.0871 |
| No log | 117.0 | 234 | 1.0808 |
| No log | 118.0 | 236 | 1.1251 |
| No log | 119.0 | 238 | 1.1709 |
| No log | 120.0 | 240 | 1.2456 |
| No log | 121.0 | 242 | 1.3081 |
| No log | 122.0 | 244 | 1.3189 |
| No log | 123.0 | 246 | 1.3107 |
| No log | 124.0 | 248 | 1.2764 |
| No log | 125.0 | 250 | 1.2323 |
| No log | 126.0 | 252 | 1.1916 |
| No log | 127.0 | 254 | 1.1873 |
| No log | 128.0 | 256 | 1.2156 |
| No log | 129.0 | 258 | 1.2442 |
| No log | 130.0 | 260 | 1.2875 |
| No log | 131.0 | 262 | 1.3244 |
| No log | 132.0 | 264 | 1.3403 |
| No log | 133.0 | 266 | 1.3596 |
| No log | 134.0 | 268 | 1.3588 |
| No log | 135.0 | 270 | 1.3378 |
| No log | 136.0 | 272 | 1.3133 |
| No log | 137.0 | 274 | 1.3000 |
| No log | 138.0 | 276 | 1.3190 |
| No log | 139.0 | 278 | 1.3629 |
| No log | 140.0 | 280 | 1.4268 |
| No log | 141.0 | 282 | 1.3962 |
| No log | 142.0 | 284 | 1.3755 |
| No log | 143.0 | 286 | 1.3570 |
| No log | 144.0 | 288 | 1.3079 |
| No log | 145.0 | 290 | 1.2731 |
| No log | 146.0 | 292 | 1.2619 |
| No log | 147.0 | 294 | 1.2788 |
| No log | 148.0 | 296 | 1.2703 |
| No log | 149.0 | 298 | 1.3041 |
| No log | 150.0 | 300 | 1.3488 |
| No log | 151.0 | 302 | 1.3166 |
| No log | 152.0 | 304 | 1.2705 |
| No log | 153.0 | 306 | 1.2645 |
| No log | 154.0 | 308 | 1.2632 |
| No log | 155.0 | 310 | 1.2695 |
| No log | 156.0 | 312 | 1.3069 |
| No log | 157.0 | 314 | 1.3602 |
| No log | 158.0 | 316 | 1.4116 |
| No log | 159.0 | 318 | 1.4162 |
| No log | 160.0 | 320 | 1.3981 |
| No log | 161.0 | 322 | 1.3789 |
| No log | 162.0 | 324 | 1.3521 |
| No log | 163.0 | 326 | 1.3153 |
| No log | 164.0 | 328 | 1.2917 |
| No log | 165.0 | 330 | 1.3027 |
| No log | 166.0 | 332 | 1.3019 |
| No log | 167.0 | 334 | 1.3501 |
| No log | 168.0 | 336 | 1.3815 |
| No log | 169.0 | 338 | 1.4005 |
| No log | 170.0 | 340 | 1.4076 |
| No log | 171.0 | 342 | 1.4337 |
| No log | 172.0 | 344 | 1.4134 |
| No log | 173.0 | 346 | 1.3692 |
| No log | 174.0 | 348 | 1.3043 |
| No log | 175.0 | 350 | 1.3033 |
| No log | 176.0 | 352 | 1.2741 |
| No log | 177.0 | 354 | 1.2467 |
| No log | 178.0 | 356 | 1.2419 |
| No log | 179.0 | 358 | 1.2418 |
| No log | 180.0 | 360 | 1.2855 |
| No log | 181.0 | 362 | 1.3570 |
| No log | 182.0 | 364 | 1.3163 |
| No log | 183.0 | 366 | 1.2782 |
| No log | 184.0 | 368 | 1.2494 |
| No log | 185.0 | 370 | 1.2303 |
| No log | 186.0 | 372 | 1.2785 |
| No log | 187.0 | 374 | 1.3253 |
| No log | 188.0 | 376 | 1.3255 |
| No log | 189.0 | 378 | 1.3098 |
| No log | 190.0 | 380 | 1.2672 |
| No log | 191.0 | 382 | 1.2722 |
| No log | 192.0 | 384 | 1.2446 |
| No log | 193.0 | 386 | 1.2054 |
| No log | 194.0 | 388 | 1.2942 |
| No log | 195.0 | 390 | 1.3152 |
| No log | 196.0 | 392 | 1.3020 |
| No log | 197.0 | 394 | 1.2378 |
| No log | 198.0 | 396 | 1.2489 |
| No log | 199.0 | 398 | 1.2738 |
| No log | 200.0 | 400 | 1.3131 |
| No log | 201.0 | 402 | 1.3321 |
| No log | 202.0 | 404 | 1.3320 |
| No log | 203.0 | 406 | 1.2761 |
| No log | 204.0 | 408 | 1.1996 |
| No log | 205.0 | 410 | 1.2253 |
| No log | 206.0 | 412 | 1.2541 |
| No log | 207.0 | 414 | 1.2715 |
| No log | 208.0 | 416 | 1.3436 |
| No log | 209.0 | 418 | 1.3600 |
| No log | 210.0 | 420 | 1.3202 |
| No log | 211.0 | 422 | 1.3058 |
| No log | 212.0 | 424 | 1.3090 |
| No log | 213.0 | 426 | 1.3002 |
| No log | 214.0 | 428 | 1.2675 |
| No log | 215.0 | 430 | 1.2168 |
| No log | 216.0 | 432 | 1.2380 |
| No log | 217.0 | 434 | 1.2782 |
| No log | 218.0 | 436 | 1.3068 |
| No log | 219.0 | 438 | 1.3440 |
| No log | 220.0 | 440 | 1.4507 |
| No log | 221.0 | 442 | 1.5081 |
| No log | 222.0 | 444 | 1.5281 |
| No log | 223.0 | 446 | 1.5220 |
| No log | 224.0 | 448 | 1.4787 |
| No log | 225.0 | 450 | 1.4162 |
| No log | 226.0 | 452 | 1.3667 |
| No log | 227.0 | 454 | 1.3059 |
| No log | 228.0 | 456 | 1.2619 |
| No log | 229.0 | 458 | 1.2453 |
| No log | 230.0 | 460 | 1.2663 |
| No log | 231.0 | 462 | 1.3289 |
| No log | 232.0 | 464 | 1.3786 |
| No log | 233.0 | 466 | 1.4200 |
| No log | 234.0 | 468 | 1.4380 |
| No log | 235.0 | 470 | 1.4132 |
| No log | 236.0 | 472 | 1.4106 |
| No log | 237.0 | 474 | 1.4144 |
| No log | 238.0 | 476 | 1.4103 |
| No log | 239.0 | 478 | 1.4326 |
| No log | 240.0 | 480 | 1.4541 |
| No log | 241.0 | 482 | 1.4311 |
| No log | 242.0 | 484 | 1.3857 |
| No log | 243.0 | 486 | 1.3441 |
| No log | 244.0 | 488 | 1.3168 |
| No log | 245.0 | 490 | 1.3213 |
| No log | 246.0 | 492 | 1.3249 |
| No log | 247.0 | 494 | 1.3711 |
| No log | 248.0 | 496 | 1.4147 |
| No log | 249.0 | 498 | 1.4426 |
| 0.7848 | 250.0 | 500 | 1.4317 |
| 0.7848 | 251.0 | 502 | 1.3764 |
| 0.7848 | 252.0 | 504 | 1.3693 |
| 0.7848 | 253.0 | 506 | 1.4386 |
| 0.7848 | 254.0 | 508 | 1.5083 |
| 0.7848 | 255.0 | 510 | 1.5463 |
| 0.7848 | 256.0 | 512 | 1.5666 |
| 0.7848 | 257.0 | 514 | 1.5593 |
| 0.7848 | 258.0 | 516 | 1.4716 |
| 0.7848 | 259.0 | 518 | 1.4204 |
| 0.7848 | 260.0 | 520 | 1.4898 |
| 0.7848 | 261.0 | 522 | 1.4954 |
| 0.7848 | 262.0 | 524 | 1.5118 |
| 0.7848 | 263.0 | 526 | 1.5007 |
| 0.7848 | 264.0 | 528 | 1.4358 |
| 0.7848 | 265.0 | 530 | 1.4149 |
| 0.7848 | 266.0 | 532 | 1.3814 |
| 0.7848 | 267.0 | 534 | 1.3725 |
| 0.7848 | 268.0 | 536 | 1.4130 |
| 0.7848 | 269.0 | 538 | 1.4104 |
| 0.7848 | 270.0 | 540 | 1.4160 |
| 0.7848 | 271.0 | 542 | 1.4233 |
| 0.7848 | 272.0 | 544 | 1.4008 |
| 0.7848 | 273.0 | 546 | 1.3969 |
| 0.7848 | 274.0 | 548 | 1.3843 |
| 0.7848 | 275.0 | 550 | 1.3700 |
| 0.7848 | 276.0 | 552 | 1.3677 |
| 0.7848 | 277.0 | 554 | 1.4000 |
| 0.7848 | 278.0 | 556 | 1.4446 |
| 0.7848 | 279.0 | 558 | 1.4595 |
| 0.7848 | 280.0 | 560 | 1.4859 |
| 0.7848 | 281.0 | 562 | 1.5271 |
| 0.7848 | 282.0 | 564 | 1.5535 |
| 0.7848 | 283.0 | 566 | 1.5690 |
| 0.7848 | 284.0 | 568 | 1.5768 |
| 0.7848 | 285.0 | 570 | 1.5826 |
| 0.7848 | 286.0 | 572 | 1.5761 |
| 0.7848 | 287.0 | 574 | 1.5642 |
| 0.7848 | 288.0 | 576 | 1.5660 |
| 0.7848 | 289.0 | 578 | 1.5839 |
| 0.7848 | 290.0 | 580 | 1.5806 |
| 0.7848 | 291.0 | 582 | 1.5580 |
| 0.7848 | 292.0 | 584 | 1.5059 |
| 0.7848 | 293.0 | 586 | 1.4607 |
| 0.7848 | 294.0 | 588 | 1.4186 |
| 0.7848 | 295.0 | 590 | 1.3715 |
| 0.7848 | 296.0 | 592 | 1.3236 |
| 0.7848 | 297.0 | 594 | 1.2923 |
| 0.7848 | 298.0 | 596 | 1.2989 |
| 0.7848 | 299.0 | 598 | 1.3184 |
| 0.7848 | 300.0 | 600 | 1.3363 |
| 0.7848 | 301.0 | 602 | 1.3637 |
| 0.7848 | 302.0 | 604 | 1.4197 |
| 0.7848 | 303.0 | 606 | 1.4449 |
| 0.7848 | 304.0 | 608 | 1.4422 |
| 0.7848 | 305.0 | 610 | 1.4147 |
| 0.7848 | 306.0 | 612 | 1.3678 |
| 0.7848 | 307.0 | 614 | 1.3370 |
| 0.7848 | 308.0 | 616 | 1.3288 |
| 0.7848 | 309.0 | 618 | 1.3449 |
| 0.7848 | 310.0 | 620 | 1.3458 |
| 0.7848 | 311.0 | 622 | 1.3237 |
| 0.7848 | 312.0 | 624 | 1.3114 |
| 0.7848 | 313.0 | 626 | 1.2934 |
| 0.7848 | 314.0 | 628 | 1.2732 |
| 0.7848 | 315.0 | 630 | 1.2638 |
| 0.7848 | 316.0 | 632 | 1.2604 |
| 0.7848 | 317.0 | 634 | 1.2501 |
| 0.7848 | 318.0 | 636 | 1.2382 |
| 0.7848 | 319.0 | 638 | 1.2541 |
| 0.7848 | 320.0 | 640 | 1.2850 |
| 0.7848 | 321.0 | 642 | 1.2946 |
| 0.7848 | 322.0 | 644 | 1.3294 |
| 0.7848 | 323.0 | 646 | 1.3795 |
| 0.7848 | 324.0 | 648 | 1.4286 |
| 0.7848 | 325.0 | 650 | 1.4556 |
| 0.7848 | 326.0 | 652 | 1.4711 |
| 0.7848 | 327.0 | 654 | 1.4741 |
| 0.7848 | 328.0 | 656 | 1.4630 |
| 0.7848 | 329.0 | 658 | 1.4480 |
| 0.7848 | 330.0 | 660 | 1.4296 |
| 0.7848 | 331.0 | 662 | 1.4217 |
| 0.7848 | 332.0 | 664 | 1.4218 |
| 0.7848 | 333.0 | 666 | 1.4153 |
| 0.7848 | 334.0 | 668 | 1.4132 |
| 0.7848 | 335.0 | 670 | 1.4486 |
| 0.7848 | 336.0 | 672 | 1.4687 |
| 0.7848 | 337.0 | 674 | 1.4784 |
| 0.7848 | 338.0 | 676 | 1.4862 |
| 0.7848 | 339.0 | 678 | 1.4815 |
| 0.7848 | 340.0 | 680 | 1.4714 |
| 0.7848 | 341.0 | 682 | 1.4610 |
| 0.7848 | 342.0 | 684 | 1.4427 |
| 0.7848 | 343.0 | 686 | 1.4226 |
| 0.7848 | 344.0 | 688 | 1.4136 |
| 0.7848 | 345.0 | 690 | 1.4082 |
| 0.7848 | 346.0 | 692 | 1.3978 |
| 0.7848 | 347.0 | 694 | 1.3757 |
| 0.7848 | 348.0 | 696 | 1.3628 |
| 0.7848 | 349.0 | 698 | 1.3472 |
| 0.7848 | 350.0 | 700 | 1.3555 |
| 0.7848 | 351.0 | 702 | 1.3794 |
| 0.7848 | 352.0 | 704 | 1.4010 |
| 0.7848 | 353.0 | 706 | 1.4201 |
| 0.7848 | 354.0 | 708 | 1.4221 |
| 0.7848 | 355.0 | 710 | 1.4147 |
| 0.7848 | 356.0 | 712 | 1.4033 |
| 0.7848 | 357.0 | 714 | 1.3899 |
| 0.7848 | 358.0 | 716 | 1.3824 |
| 0.7848 | 359.0 | 718 | 1.3796 |
| 0.7848 | 360.0 | 720 | 1.3787 |
| 0.7848 | 361.0 | 722 | 1.3877 |
| 0.7848 | 362.0 | 724 | 1.3969 |
| 0.7848 | 363.0 | 726 | 1.4222 |
| 0.7848 | 364.0 | 728 | 1.4430 |
| 0.7848 | 365.0 | 730 | 1.4684 |
| 0.7848 | 366.0 | 732 | 1.4931 |
| 0.7848 | 367.0 | 734 | 1.5098 |
| 0.7848 | 368.0 | 736 | 1.5248 |
| 0.7848 | 369.0 | 738 | 1.5321 |
| 0.7848 | 370.0 | 740 | 1.5295 |
| 0.7848 | 371.0 | 742 | 1.5166 |
| 0.7848 | 372.0 | 744 | 1.4944 |
| 0.7848 | 373.0 | 746 | 1.4734 |
| 0.7848 | 374.0 | 748 | 1.4471 |
| 0.7848 | 375.0 | 750 | 1.4311 |
| 0.7848 | 376.0 | 752 | 1.4246 |
| 0.7848 | 377.0 | 754 | 1.4219 |
| 0.7848 | 378.0 | 756 | 1.4135 |
| 0.7848 | 379.0 | 758 | 1.3978 |
| 0.7848 | 380.0 | 760 | 1.3815 |
| 0.7848 | 381.0 | 762 | 1.3677 |
| 0.7848 | 382.0 | 764 | 1.3604 |
| 0.7848 | 383.0 | 766 | 1.3502 |
| 0.7848 | 384.0 | 768 | 1.3372 |
| 0.7848 | 385.0 | 770 | 1.3226 |
| 0.7848 | 386.0 | 772 | 1.3116 |
| 0.7848 | 387.0 | 774 | 1.2846 |
| 0.7848 | 388.0 | 776 | 1.2601 |
| 0.7848 | 389.0 | 778 | 1.2552 |
| 0.7848 | 390.0 | 780 | 1.2723 |
| 0.7848 | 391.0 | 782 | 1.2866 |
| 0.7848 | 392.0 | 784 | 1.3037 |
| 0.7848 | 393.0 | 786 | 1.3170 |
| 0.7848 | 394.0 | 788 | 1.3313 |
| 0.7848 | 395.0 | 790 | 1.3407 |
| 0.7848 | 396.0 | 792 | 1.3527 |
| 0.7848 | 397.0 | 794 | 1.3666 |
| 0.7848 | 398.0 | 796 | 1.3755 |
| 0.7848 | 399.0 | 798 | 1.3788 |
| 0.7848 | 400.0 | 800 | 1.4101 |
| 0.7848 | 401.0 | 802 | 1.4477 |
| 0.7848 | 402.0 | 804 | 1.4682 |
| 0.7848 | 403.0 | 806 | 1.4731 |
| 0.7848 | 404.0 | 808 | 1.4577 |
| 0.7848 | 405.0 | 810 | 1.4387 |
| 0.7848 | 406.0 | 812 | 1.4221 |
| 0.7848 | 407.0 | 814 | 1.4069 |
| 0.7848 | 408.0 | 816 | 1.3935 |
| 0.7848 | 409.0 | 818 | 1.3736 |
| 0.7848 | 410.0 | 820 | 1.3555 |
| 0.7848 | 411.0 | 822 | 1.3283 |
| 0.7848 | 412.0 | 824 | 1.2969 |
| 0.7848 | 413.0 | 826 | 1.2819 |
| 0.7848 | 414.0 | 828 | 1.2790 |
| 0.7848 | 415.0 | 830 | 1.2800 |
| 0.7848 | 416.0 | 832 | 1.2791 |
| 0.7848 | 417.0 | 834 | 1.2772 |
| 0.7848 | 418.0 | 836 | 1.2733 |
| 0.7848 | 419.0 | 838 | 1.2535 |
| 0.7848 | 420.0 | 840 | 1.2329 |
| 0.7848 | 421.0 | 842 | 1.2142 |
| 0.7848 | 422.0 | 844 | 1.2034 |
| 0.7848 | 423.0 | 846 | 1.1952 |
| 0.7848 | 424.0 | 848 | 1.1934 |
| 0.7848 | 425.0 | 850 | 1.1919 |
| 0.7848 | 426.0 | 852 | 1.2076 |
| 0.7848 | 427.0 | 854 | 1.2315 |
| 0.7848 | 428.0 | 856 | 1.2548 |
| 0.7848 | 429.0 | 858 | 1.2658 |
| 0.7848 | 430.0 | 860 | 1.2788 |
| 0.7848 | 431.0 | 862 | 1.3217 |
| 0.7848 | 432.0 | 864 | 1.3605 |
| 0.7848 | 433.0 | 866 | 1.3932 |
| 0.7848 | 434.0 | 868 | 1.3879 |
| 0.7848 | 435.0 | 870 | 1.3466 |
| 0.7848 | 436.0 | 872 | 1.3641 |
| 0.7848 | 437.0 | 874 | 1.3857 |
| 0.7848 | 438.0 | 876 | 1.3715 |
| 0.7848 | 439.0 | 878 | 1.3418 |
| 0.7848 | 440.0 | 880 | 1.3074 |
| 0.7848 | 441.0 | 882 | 1.2860 |
| 0.7848 | 442.0 | 884 | 1.2784 |
| 0.7848 | 443.0 | 886 | 1.2717 |
| 0.7848 | 444.0 | 888 | 1.2610 |
| 0.7848 | 445.0 | 890 | 1.2425 |
| 0.7848 | 446.0 | 892 | 1.2241 |
| 0.7848 | 447.0 | 894 | 1.2384 |
| 0.7848 | 448.0 | 896 | 1.2585 |
| 0.7848 | 449.0 | 898 | 1.3208 |
| 0.7848 | 450.0 | 900 | 1.3714 |
| 0.7848 | 451.0 | 902 | 1.3879 |
| 0.7848 | 452.0 | 904 | 1.3987 |
| 0.7848 | 453.0 | 906 | 1.3883 |
| 0.7848 | 454.0 | 908 | 1.3654 |
| 0.7848 | 455.0 | 910 | 1.3509 |
| 0.7848 | 456.0 | 912 | 1.3285 |
| 0.7848 | 457.0 | 914 | 1.2983 |
| 0.7848 | 458.0 | 916 | 1.2799 |
| 0.7848 | 459.0 | 918 | 1.2651 |
| 0.7848 | 460.0 | 920 | 1.2546 |
| 0.7848 | 461.0 | 922 | 1.2518 |
| 0.7848 | 462.0 | 924 | 1.2571 |
| 0.7848 | 463.0 | 926 | 1.2691 |
| 0.7848 | 464.0 | 928 | 1.2792 |
| 0.7848 | 465.0 | 930 | 1.2884 |
| 0.7848 | 466.0 | 932 | 1.2971 |
| 0.7848 | 467.0 | 934 | 1.3052 |
| 0.7848 | 468.0 | 936 | 1.3093 |
| 0.7848 | 469.0 | 938 | 1.3341 |
| 0.7848 | 470.0 | 940 | 1.3468 |
| 0.7848 | 471.0 | 942 | 1.3557 |
| 0.7848 | 472.0 | 944 | 1.3655 |
| 0.7848 | 473.0 | 946 | 1.3381 |
| 0.7848 | 474.0 | 948 | 1.2787 |
| 0.7848 | 475.0 | 950 | 1.2582 |
| 0.7848 | 476.0 | 952 | 1.2494 |
| 0.7848 | 477.0 | 954 | 1.2374 |
| 0.7848 | 478.0 | 956 | 1.2299 |
| 0.7848 | 479.0 | 958 | 1.2267 |
| 0.7848 | 480.0 | 960 | 1.2277 |
| 0.7848 | 481.0 | 962 | 1.2307 |
| 0.7848 | 482.0 | 964 | 1.2656 |
| 0.7848 | 483.0 | 966 | 1.3019 |
| 0.7848 | 484.0 | 968 | 1.3404 |
| 0.7848 | 485.0 | 970 | 1.3731 |
| 0.7848 | 486.0 | 972 | 1.3912 |
| 0.7848 | 487.0 | 974 | 1.4026 |
| 0.7848 | 488.0 | 976 | 1.4094 |
| 0.7848 | 489.0 | 978 | 1.4133 |
| 0.7848 | 490.0 | 980 | 1.4111 |
| 0.7848 | 491.0 | 982 | 1.4091 |
| 0.7848 | 492.0 | 984 | 1.4110 |
| 0.7848 | 493.0 | 986 | 1.4083 |
| 0.7848 | 494.0 | 988 | 1.4087 |
| 0.7848 | 495.0 | 990 | 1.4063 |
| 0.7848 | 496.0 | 992 | 1.4165 |
| 0.7848 | 497.0 | 994 | 1.4238 |
| 0.7848 | 498.0 | 996 | 1.4307 |
| 0.7848 | 499.0 | 998 | 1.4352 |
| 0.4799 | 500.0 | 1000 | 1.4343 |
| 0.4799 | 501.0 | 1002 | 1.4233 |
| 0.4799 | 502.0 | 1004 | 1.4097 |
| 0.4799 | 503.0 | 1006 | 1.3987 |
| 0.4799 | 504.0 | 1008 | 1.3914 |
| 0.4799 | 505.0 | 1010 | 1.3861 |
| 0.4799 | 506.0 | 1012 | 1.3807 |
| 0.4799 | 507.0 | 1014 | 1.3687 |
| 0.4799 | 508.0 | 1016 | 1.3523 |
| 0.4799 | 509.0 | 1018 | 1.3331 |
| 0.4799 | 510.0 | 1020 | 1.3235 |
| 0.4799 | 511.0 | 1022 | 1.3246 |
| 0.4799 | 512.0 | 1024 | 1.3251 |
| 0.4799 | 513.0 | 1026 | 1.3245 |
| 0.4799 | 514.0 | 1028 | 1.3233 |
| 0.4799 | 515.0 | 1030 | 1.3164 |
| 0.4799 | 516.0 | 1032 | 1.3120 |
| 0.4799 | 517.0 | 1034 | 1.3099 |
| 0.4799 | 518.0 | 1036 | 1.3106 |
| 0.4799 | 519.0 | 1038 | 1.3121 |
| 0.4799 | 520.0 | 1040 | 1.3117 |
| 0.4799 | 521.0 | 1042 | 1.3100 |
| 0.4799 | 522.0 | 1044 | 1.3111 |
| 0.4799 | 523.0 | 1046 | 1.3328 |
| 0.4799 | 524.0 | 1048 | 1.3597 |
| 0.4799 | 525.0 | 1050 | 1.3813 |
| 0.4799 | 526.0 | 1052 | 1.3990 |
| 0.4799 | 527.0 | 1054 | 1.4123 |
| 0.4799 | 528.0 | 1056 | 1.4261 |
| 0.4799 | 529.0 | 1058 | 1.4358 |
| 0.4799 | 530.0 | 1060 | 1.4410 |
| 0.4799 | 531.0 | 1062 | 1.4403 |
| 0.4799 | 532.0 | 1064 | 1.4372 |
| 0.4799 | 533.0 | 1066 | 1.4225 |
| 0.4799 | 534.0 | 1068 | 1.4037 |
| 0.4799 | 535.0 | 1070 | 1.3855 |
| 0.4799 | 536.0 | 1072 | 1.3694 |
| 0.4799 | 537.0 | 1074 | 1.3519 |
| 0.4799 | 538.0 | 1076 | 1.3417 |
| 0.4799 | 539.0 | 1078 | 1.3329 |
| 0.4799 | 540.0 | 1080 | 1.3248 |
| 0.4799 | 541.0 | 1082 | 1.3152 |
| 0.4799 | 542.0 | 1084 | 1.3113 |
| 0.4799 | 543.0 | 1086 | 1.3064 |
| 0.4799 | 544.0 | 1088 | 1.3041 |
| 0.4799 | 545.0 | 1090 | 1.3012 |
| 0.4799 | 546.0 | 1092 | 1.3057 |
| 0.4799 | 547.0 | 1094 | 1.3255 |
| 0.4799 | 548.0 | 1096 | 1.3440 |
| 0.4799 | 549.0 | 1098 | 1.3639 |
| 0.4799 | 550.0 | 1100 | 1.3943 |
| 0.4799 | 551.0 | 1102 | 1.4579 |
| 0.4799 | 552.0 | 1104 | 1.5003 |
| 0.4799 | 553.0 | 1106 | 1.5229 |
| 0.4799 | 554.0 | 1108 | 1.5363 |
| 0.4799 | 555.0 | 1110 | 1.5412 |
| 0.4799 | 556.0 | 1112 | 1.5620 |
| 0.4799 | 557.0 | 1114 | 1.5717 |
| 0.4799 | 558.0 | 1116 | 1.5764 |
| 0.4799 | 559.0 | 1118 | 1.5700 |
| 0.4799 | 560.0 | 1120 | 1.5607 |
| 0.4799 | 561.0 | 1122 | 1.5492 |
| 0.4799 | 562.0 | 1124 | 1.5384 |
| 0.4799 | 563.0 | 1126 | 1.5219 |
| 0.4799 | 564.0 | 1128 | 1.5070 |
| 0.4799 | 565.0 | 1130 | 1.4930 |
| 0.4799 | 566.0 | 1132 | 1.4822 |
| 0.4799 | 567.0 | 1134 | 1.4685 |
| 0.4799 | 568.0 | 1136 | 1.4568 |
| 0.4799 | 569.0 | 1138 | 1.4585 |
| 0.4799 | 570.0 | 1140 | 1.4424 |
| 0.4799 | 571.0 | 1142 | 1.4010 |
| 0.4799 | 572.0 | 1144 | 1.3688 |
| 0.4799 | 573.0 | 1146 | 1.3573 |
| 0.4799 | 574.0 | 1148 | 1.3528 |
| 0.4799 | 575.0 | 1150 | 1.3519 |
| 0.4799 | 576.0 | 1152 | 1.3527 |
| 0.4799 | 577.0 | 1154 | 1.3493 |
| 0.4799 | 578.0 | 1156 | 1.3456 |
| 0.4799 | 579.0 | 1158 | 1.3396 |
| 0.4799 | 580.0 | 1160 | 1.3285 |
| 0.4799 | 581.0 | 1162 | 1.3217 |
| 0.4799 | 582.0 | 1164 | 1.3149 |
| 0.4799 | 583.0 | 1166 | 1.3102 |
| 0.4799 | 584.0 | 1168 | 1.3067 |
| 0.4799 | 585.0 | 1170 | 1.3053 |
| 0.4799 | 586.0 | 1172 | 1.3026 |
| 0.4799 | 587.0 | 1174 | 1.3002 |
| 0.4799 | 588.0 | 1176 | 1.2997 |
| 0.4799 | 589.0 | 1178 | 1.3007 |
| 0.4799 | 590.0 | 1180 | 1.2987 |
| 0.4799 | 591.0 | 1182 | 1.2945 |
| 0.4799 | 592.0 | 1184 | 1.2892 |
| 0.4799 | 593.0 | 1186 | 1.2837 |
| 0.4799 | 594.0 | 1188 | 1.2824 |
| 0.4799 | 595.0 | 1190 | 1.2879 |
| 0.4799 | 596.0 | 1192 | 1.2945 |
| 0.4799 | 597.0 | 1194 | 1.3013 |
| 0.4799 | 598.0 | 1196 | 1.3057 |
| 0.4799 | 599.0 | 1198 | 1.3086 |
| 0.4799 | 600.0 | 1200 | 1.3172 |
| 0.4799 | 601.0 | 1202 | 1.3301 |
| 0.4799 | 602.0 | 1204 | 1.3395 |
| 0.4799 | 603.0 | 1206 | 1.3458 |
| 0.4799 | 604.0 | 1208 | 1.3459 |
| 0.4799 | 605.0 | 1210 | 1.3400 |
| 0.4799 | 606.0 | 1212 | 1.3242 |
| 0.4799 | 607.0 | 1214 | 1.3115 |
| 0.4799 | 608.0 | 1216 | 1.3021 |
| 0.4799 | 609.0 | 1218 | 1.3064 |
| 0.4799 | 610.0 | 1220 | 1.3123 |
| 0.4799 | 611.0 | 1222 | 1.3143 |
| 0.4799 | 612.0 | 1224 | 1.3082 |
| 0.4799 | 613.0 | 1226 | 1.2928 |
| 0.4799 | 614.0 | 1228 | 1.2830 |
| 0.4799 | 615.0 | 1230 | 1.2713 |
| 0.4799 | 616.0 | 1232 | 1.2756 |
| 0.4799 | 617.0 | 1234 | 1.2929 |
| 0.4799 | 618.0 | 1236 | 1.3059 |
| 0.4799 | 619.0 | 1238 | 1.3025 |
| 0.4799 | 620.0 | 1240 | 1.2950 |
| 0.4799 | 621.0 | 1242 | 1.3077 |
| 0.4799 | 622.0 | 1244 | 1.3434 |
| 0.4799 | 623.0 | 1246 | 1.3743 |
| 0.4799 | 624.0 | 1248 | 1.4028 |
| 0.4799 | 625.0 | 1250 | 1.4247 |
| 0.4799 | 626.0 | 1252 | 1.4421 |
| 0.4799 | 627.0 | 1254 | 1.4513 |
| 0.4799 | 628.0 | 1256 | 1.4576 |
| 0.4799 | 629.0 | 1258 | 1.4610 |
| 0.4799 | 630.0 | 1260 | 1.4641 |
| 0.4799 | 631.0 | 1262 | 1.4660 |
| 0.4799 | 632.0 | 1264 | 1.4640 |
| 0.4799 | 633.0 | 1266 | 1.4627 |
| 0.4799 | 634.0 | 1268 | 1.4628 |
| 0.4799 | 635.0 | 1270 | 1.4645 |
| 0.4799 | 636.0 | 1272 | 1.4792 |
| 0.4799 | 637.0 | 1274 | 1.4911 |
| 0.4799 | 638.0 | 1276 | 1.4977 |
| 0.4799 | 639.0 | 1278 | 1.5028 |
| 0.4799 | 640.0 | 1280 | 1.5062 |
| 0.4799 | 641.0 | 1282 | 1.5110 |
| 0.4799 | 642.0 | 1284 | 1.5143 |
| 0.4799 | 643.0 | 1286 | 1.5149 |
| 0.4799 | 644.0 | 1288 | 1.5138 |
| 0.4799 | 645.0 | 1290 | 1.5102 |
| 0.4799 | 646.0 | 1292 | 1.5074 |
| 0.4799 | 647.0 | 1294 | 1.5026 |
| 0.4799 | 648.0 | 1296 | 1.4990 |
| 0.4799 | 649.0 | 1298 | 1.4974 |
| 0.4799 | 650.0 | 1300 | 1.4953 |
| 0.4799 | 651.0 | 1302 | 1.4932 |
| 0.4799 | 652.0 | 1304 | 1.4911 |
| 0.4799 | 653.0 | 1306 | 1.4916 |
| 0.4799 | 654.0 | 1308 | 1.4895 |
| 0.4799 | 655.0 | 1310 | 1.4865 |
| 0.4799 | 656.0 | 1312 | 1.4734 |
| 0.4799 | 657.0 | 1314 | 1.4608 |
| 0.4799 | 658.0 | 1316 | 1.4476 |
| 0.4799 | 659.0 | 1318 | 1.4363 |
| 0.4799 | 660.0 | 1320 | 1.4228 |
| 0.4799 | 661.0 | 1322 | 1.4101 |
| 0.4799 | 662.0 | 1324 | 1.3990 |
| 0.4799 | 663.0 | 1326 | 1.3882 |
| 0.4799 | 664.0 | 1328 | 1.3800 |
| 0.4799 | 665.0 | 1330 | 1.3741 |
| 0.4799 | 666.0 | 1332 | 1.3672 |
| 0.4799 | 667.0 | 1334 | 1.3610 |
| 0.4799 | 668.0 | 1336 | 1.3487 |
| 0.4799 | 669.0 | 1338 | 1.3423 |
| 0.4799 | 670.0 | 1340 | 1.3364 |
| 0.4799 | 671.0 | 1342 | 1.3337 |
| 0.4799 | 672.0 | 1344 | 1.3294 |
| 0.4799 | 673.0 | 1346 | 1.3256 |
| 0.4799 | 674.0 | 1348 | 1.3313 |
| 0.4799 | 675.0 | 1350 | 1.3476 |
| 0.4799 | 676.0 | 1352 | 1.3727 |
| 0.4799 | 677.0 | 1354 | 1.3927 |
| 0.4799 | 678.0 | 1356 | 1.4058 |
| 0.4799 | 679.0 | 1358 | 1.4123 |
| 0.4799 | 680.0 | 1360 | 1.4159 |
| 0.4799 | 681.0 | 1362 | 1.4177 |
| 0.4799 | 682.0 | 1364 | 1.4187 |
| 0.4799 | 683.0 | 1366 | 1.4204 |
| 0.4799 | 684.0 | 1368 | 1.4205 |
| 0.4799 | 685.0 | 1370 | 1.4190 |
| 0.4799 | 686.0 | 1372 | 1.4192 |
| 0.4799 | 687.0 | 1374 | 1.4212 |
| 0.4799 | 688.0 | 1376 | 1.4247 |
| 0.4799 | 689.0 | 1378 | 1.4259 |
| 0.4799 | 690.0 | 1380 | 1.4276 |
| 0.4799 | 691.0 | 1382 | 1.4273 |
| 0.4799 | 692.0 | 1384 | 1.4233 |
| 0.4799 | 693.0 | 1386 | 1.4206 |
| 0.4799 | 694.0 | 1388 | 1.4163 |
| 0.4799 | 695.0 | 1390 | 1.4118 |
| 0.4799 | 696.0 | 1392 | 1.4003 |
| 0.4799 | 697.0 | 1394 | 1.3824 |
| 0.4799 | 698.0 | 1396 | 1.3642 |
| 0.4799 | 699.0 | 1398 | 1.3474 |
| 0.4799 | 700.0 | 1400 | 1.3300 |
| 0.4799 | 701.0 | 1402 | 1.3253 |
| 0.4799 | 702.0 | 1404 | 1.3313 |
| 0.4799 | 703.0 | 1406 | 1.3416 |
| 0.4799 | 704.0 | 1408 | 1.3519 |
| 0.4799 | 705.0 | 1410 | 1.3577 |
| 0.4799 | 706.0 | 1412 | 1.3560 |
| 0.4799 | 707.0 | 1414 | 1.3507 |
| 0.4799 | 708.0 | 1416 | 1.3441 |
| 0.4799 | 709.0 | 1418 | 1.3338 |
| 0.4799 | 710.0 | 1420 | 1.3195 |
| 0.4799 | 711.0 | 1422 | 1.3074 |
| 0.4799 | 712.0 | 1424 | 1.3004 |
| 0.4799 | 713.0 | 1426 | 1.2970 |
| 0.4799 | 714.0 | 1428 | 1.2896 |
| 0.4799 | 715.0 | 1430 | 1.2801 |
| 0.4799 | 716.0 | 1432 | 1.2716 |
| 0.4799 | 717.0 | 1434 | 1.2596 |
| 0.4799 | 718.0 | 1436 | 1.2538 |
| 0.4799 | 719.0 | 1438 | 1.2512 |
| 0.4799 | 720.0 | 1440 | 1.2486 |
| 0.4799 | 721.0 | 1442 | 1.2474 |
| 0.4799 | 722.0 | 1444 | 1.2474 |
| 0.4799 | 723.0 | 1446 | 1.2469 |
| 0.4799 | 724.0 | 1448 | 1.2449 |
| 0.4799 | 725.0 | 1450 | 1.2449 |
| 0.4799 | 726.0 | 1452 | 1.2451 |
| 0.4799 | 727.0 | 1454 | 1.2441 |
| 0.4799 | 728.0 | 1456 | 1.2423 |
| 0.4799 | 729.0 | 1458 | 1.2419 |
| 0.4799 | 730.0 | 1460 | 1.2449 |
| 0.4799 | 731.0 | 1462 | 1.2471 |
| 0.4799 | 732.0 | 1464 | 1.2458 |
| 0.4799 | 733.0 | 1466 | 1.2464 |
| 0.4799 | 734.0 | 1468 | 1.2785 |
| 0.4799 | 735.0 | 1470 | 1.3207 |
| 0.4799 | 736.0 | 1472 | 1.3715 |
| 0.4799 | 737.0 | 1474 | 1.4169 |
| 0.4799 | 738.0 | 1476 | 1.4563 |
| 0.4799 | 739.0 | 1478 | 1.4869 |
| 0.4799 | 740.0 | 1480 | 1.5167 |
| 0.4799 | 741.0 | 1482 | 1.5436 |
| 0.4799 | 742.0 | 1484 | 1.5702 |
| 0.4799 | 743.0 | 1486 | 1.5851 |
| 0.4799 | 744.0 | 1488 | 1.5931 |
| 0.4799 | 745.0 | 1490 | 1.5952 |
| 0.4799 | 746.0 | 1492 | 1.5952 |
| 0.4799 | 747.0 | 1494 | 1.5880 |
| 0.4799 | 748.0 | 1496 | 1.5760 |
| 0.4799 | 749.0 | 1498 | 1.5652 |
| 0.4783 | 750.0 | 1500 | 1.5567 |
| 0.4783 | 751.0 | 1502 | 1.5484 |
| 0.4783 | 752.0 | 1504 | 1.5421 |
| 0.4783 | 753.0 | 1506 | 1.5332 |
| 0.4783 | 754.0 | 1508 | 1.5258 |
| 0.4783 | 755.0 | 1510 | 1.5244 |
| 0.4783 | 756.0 | 1512 | 1.5211 |
| 0.4783 | 757.0 | 1514 | 1.5106 |
| 0.4783 | 758.0 | 1516 | 1.5022 |
| 0.4783 | 759.0 | 1518 | 1.4976 |
| 0.4783 | 760.0 | 1520 | 1.5017 |
| 0.4783 | 761.0 | 1522 | 1.5078 |
| 0.4783 | 762.0 | 1524 | 1.5087 |
| 0.4783 | 763.0 | 1526 | 1.5105 |
| 0.4783 | 764.0 | 1528 | 1.5117 |
| 0.4783 | 765.0 | 1530 | 1.5050 |
| 0.4783 | 766.0 | 1532 | 1.5032 |
| 0.4783 | 767.0 | 1534 | 1.5026 |
| 0.4783 | 768.0 | 1536 | 1.5017 |
| 0.4783 | 769.0 | 1538 | 1.5065 |
| 0.4783 | 770.0 | 1540 | 1.5154 |
| 0.4783 | 771.0 | 1542 | 1.5251 |
| 0.4783 | 772.0 | 1544 | 1.5300 |
| 0.4783 | 773.0 | 1546 | 1.5311 |
| 0.4783 | 774.0 | 1548 | 1.5293 |
| 0.4783 | 775.0 | 1550 | 1.5223 |
| 0.4783 | 776.0 | 1552 | 1.5192 |
| 0.4783 | 777.0 | 1554 | 1.5206 |
| 0.4783 | 778.0 | 1556 | 1.5233 |
| 0.4783 | 779.0 | 1558 | 1.5283 |
| 0.4783 | 780.0 | 1560 | 1.5332 |
| 0.4783 | 781.0 | 1562 | 1.5299 |
| 0.4783 | 782.0 | 1564 | 1.5230 |
| 0.4783 | 783.0 | 1566 | 1.5173 |
| 0.4783 | 784.0 | 1568 | 1.5078 |
| 0.4783 | 785.0 | 1570 | 1.4983 |
| 0.4783 | 786.0 | 1572 | 1.4891 |
| 0.4783 | 787.0 | 1574 | 1.4814 |
| 0.4783 | 788.0 | 1576 | 1.4752 |
| 0.4783 | 789.0 | 1578 | 1.4733 |
| 0.4783 | 790.0 | 1580 | 1.4810 |
| 0.4783 | 791.0 | 1582 | 1.4864 |
| 0.4783 | 792.0 | 1584 | 1.4891 |
| 0.4783 | 793.0 | 1586 | 1.4871 |
| 0.4783 | 794.0 | 1588 | 1.4864 |
| 0.4783 | 795.0 | 1590 | 1.4846 |
| 0.4783 | 796.0 | 1592 | 1.4813 |
| 0.4783 | 797.0 | 1594 | 1.4784 |
| 0.4783 | 798.0 | 1596 | 1.4754 |
| 0.4783 | 799.0 | 1598 | 1.4725 |
| 0.4783 | 800.0 | 1600 | 1.4684 |
| 0.4783 | 801.0 | 1602 | 1.4653 |
| 0.4783 | 802.0 | 1604 | 1.4570 |
| 0.4783 | 803.0 | 1606 | 1.4437 |
| 0.4783 | 804.0 | 1608 | 1.4326 |
| 0.4783 | 805.0 | 1610 | 1.4253 |
| 0.4783 | 806.0 | 1612 | 1.4183 |
| 0.4783 | 807.0 | 1614 | 1.4131 |
| 0.4783 | 808.0 | 1616 | 1.4044 |
| 0.4783 | 809.0 | 1618 | 1.3940 |
| 0.4783 | 810.0 | 1620 | 1.3876 |
| 0.4783 | 811.0 | 1622 | 1.3929 |
| 0.4783 | 812.0 | 1624 | 1.3970 |
| 0.4783 | 813.0 | 1626 | 1.4008 |
| 0.4783 | 814.0 | 1628 | 1.4023 |
| 0.4783 | 815.0 | 1630 | 1.4080 |
| 0.4783 | 816.0 | 1632 | 1.4098 |
| 0.4783 | 817.0 | 1634 | 1.4080 |
| 0.4783 | 818.0 | 1636 | 1.4124 |
| 0.4783 | 819.0 | 1638 | 1.4114 |
| 0.4783 | 820.0 | 1640 | 1.4106 |
| 0.4783 | 821.0 | 1642 | 1.4061 |
| 0.4783 | 822.0 | 1644 | 1.4033 |
| 0.4783 | 823.0 | 1646 | 1.4018 |
| 0.4783 | 824.0 | 1648 | 1.3968 |
| 0.4783 | 825.0 | 1650 | 1.3924 |
| 0.4783 | 826.0 | 1652 | 1.3878 |
| 0.4783 | 827.0 | 1654 | 1.3867 |
| 0.4783 | 828.0 | 1656 | 1.3847 |
| 0.4783 | 829.0 | 1658 | 1.3812 |
| 0.4783 | 830.0 | 1660 | 1.3841 |
| 0.4783 | 831.0 | 1662 | 1.3840 |
| 0.4783 | 832.0 | 1664 | 1.3869 |
| 0.4783 | 833.0 | 1666 | 1.3893 |
| 0.4783 | 834.0 | 1668 | 1.3902 |
| 0.4783 | 835.0 | 1670 | 1.3901 |
| 0.4783 | 836.0 | 1672 | 1.3927 |
| 0.4783 | 837.0 | 1674 | 1.3992 |
| 0.4783 | 838.0 | 1676 | 1.4043 |
| 0.4783 | 839.0 | 1678 | 1.4087 |
| 0.4783 | 840.0 | 1680 | 1.4168 |
| 0.4783 | 841.0 | 1682 | 1.4221 |
| 0.4783 | 842.0 | 1684 | 1.4275 |
| 0.4783 | 843.0 | 1686 | 1.4309 |
| 0.4783 | 844.0 | 1688 | 1.4353 |
| 0.4783 | 845.0 | 1690 | 1.4388 |
| 0.4783 | 846.0 | 1692 | 1.4389 |
| 0.4783 | 847.0 | 1694 | 1.4364 |
| 0.4783 | 848.0 | 1696 | 1.4346 |
| 0.4783 | 849.0 | 1698 | 1.4334 |
| 0.4783 | 850.0 | 1700 | 1.4328 |
| 0.4783 | 851.0 | 1702 | 1.4328 |
| 0.4783 | 852.0 | 1704 | 1.4321 |
| 0.4783 | 853.0 | 1706 | 1.4277 |
| 0.4783 | 854.0 | 1708 | 1.4242 |
| 0.4783 | 855.0 | 1710 | 1.4211 |
| 0.4783 | 856.0 | 1712 | 1.4173 |
| 0.4783 | 857.0 | 1714 | 1.4133 |
| 0.4783 | 858.0 | 1716 | 1.4071 |
| 0.4783 | 859.0 | 1718 | 1.4056 |
| 0.4783 | 860.0 | 1720 | 1.4061 |
| 0.4783 | 861.0 | 1722 | 1.4074 |
| 0.4783 | 862.0 | 1724 | 1.4107 |
| 0.4783 | 863.0 | 1726 | 1.4168 |
| 0.4783 | 864.0 | 1728 | 1.4202 |
| 0.4783 | 865.0 | 1730 | 1.4238 |
| 0.4783 | 866.0 | 1732 | 1.4290 |
| 0.4783 | 867.0 | 1734 | 1.4301 |
| 0.4783 | 868.0 | 1736 | 1.4320 |
| 0.4783 | 869.0 | 1738 | 1.4326 |
| 0.4783 | 870.0 | 1740 | 1.4325 |
| 0.4783 | 871.0 | 1742 | 1.4312 |
| 0.4783 | 872.0 | 1744 | 1.4294 |
| 0.4783 | 873.0 | 1746 | 1.4266 |
| 0.4783 | 874.0 | 1748 | 1.4225 |
| 0.4783 | 875.0 | 1750 | 1.4188 |
| 0.4783 | 876.0 | 1752 | 1.4138 |
| 0.4783 | 877.0 | 1754 | 1.4060 |
| 0.4783 | 878.0 | 1756 | 1.3991 |
| 0.4783 | 879.0 | 1758 | 1.3921 |
| 0.4783 | 880.0 | 1760 | 1.3856 |
| 0.4783 | 881.0 | 1762 | 1.3814 |
| 0.4783 | 882.0 | 1764 | 1.3789 |
| 0.4783 | 883.0 | 1766 | 1.3773 |
| 0.4783 | 884.0 | 1768 | 1.3760 |
| 0.4783 | 885.0 | 1770 | 1.3746 |
| 0.4783 | 886.0 | 1772 | 1.3738 |
| 0.4783 | 887.0 | 1774 | 1.3730 |
| 0.4783 | 888.0 | 1776 | 1.3726 |
| 0.4783 | 889.0 | 1778 | 1.3716 |
| 0.4783 | 890.0 | 1780 | 1.3694 |
| 0.4783 | 891.0 | 1782 | 1.3650 |
| 0.4783 | 892.0 | 1784 | 1.3603 |
| 0.4783 | 893.0 | 1786 | 1.3550 |
| 0.4783 | 894.0 | 1788 | 1.3529 |
| 0.4783 | 895.0 | 1790 | 1.3525 |
| 0.4783 | 896.0 | 1792 | 1.3511 |
| 0.4783 | 897.0 | 1794 | 1.3507 |
| 0.4783 | 898.0 | 1796 | 1.3488 |
| 0.4783 | 899.0 | 1798 | 1.3484 |
| 0.4783 | 900.0 | 1800 | 1.3473 |
| 0.4783 | 901.0 | 1802 | 1.3507 |
| 0.4783 | 902.0 | 1804 | 1.3555 |
| 0.4783 | 903.0 | 1806 | 1.3616 |
| 0.4783 | 904.0 | 1808 | 1.3682 |
| 0.4783 | 905.0 | 1810 | 1.3711 |
| 0.4783 | 906.0 | 1812 | 1.3737 |
| 0.4783 | 907.0 | 1814 | 1.3745 |
| 0.4783 | 908.0 | 1816 | 1.3762 |
| 0.4783 | 909.0 | 1818 | 1.3768 |
| 0.4783 | 910.0 | 1820 | 1.3749 |
| 0.4783 | 911.0 | 1822 | 1.3727 |
| 0.4783 | 912.0 | 1824 | 1.3705 |
| 0.4783 | 913.0 | 1826 | 1.3714 |
| 0.4783 | 914.0 | 1828 | 1.3751 |
| 0.4783 | 915.0 | 1830 | 1.3775 |
| 0.4783 | 916.0 | 1832 | 1.3784 |
| 0.4783 | 917.0 | 1834 | 1.3785 |
| 0.4783 | 918.0 | 1836 | 1.3817 |
| 0.4783 | 919.0 | 1838 | 1.3845 |
| 0.4783 | 920.0 | 1840 | 1.3866 |
| 0.4783 | 921.0 | 1842 | 1.3899 |
| 0.4783 | 922.0 | 1844 | 1.3908 |
| 0.4783 | 923.0 | 1846 | 1.3949 |
| 0.4783 | 924.0 | 1848 | 1.3996 |
| 0.4783 | 925.0 | 1850 | 1.4025 |
| 0.4783 | 926.0 | 1852 | 1.4042 |
| 0.4783 | 927.0 | 1854 | 1.4060 |
| 0.4783 | 928.0 | 1856 | 1.4077 |
| 0.4783 | 929.0 | 1858 | 1.4104 |
| 0.4783 | 930.0 | 1860 | 1.4122 |
| 0.4783 | 931.0 | 1862 | 1.4154 |
| 0.4783 | 932.0 | 1864 | 1.4194 |
| 0.4783 | 933.0 | 1866 | 1.4220 |
| 0.4783 | 934.0 | 1868 | 1.4251 |
| 0.4783 | 935.0 | 1870 | 1.4291 |
| 0.4783 | 936.0 | 1872 | 1.4326 |
| 0.4783 | 937.0 | 1874 | 1.4357 |
| 0.4783 | 938.0 | 1876 | 1.4393 |
| 0.4783 | 939.0 | 1878 | 1.4429 |
| 0.4783 | 940.0 | 1880 | 1.4463 |
| 0.4783 | 941.0 | 1882 | 1.4479 |
| 0.4783 | 942.0 | 1884 | 1.4490 |
| 0.4783 | 943.0 | 1886 | 1.4497 |
| 0.4783 | 944.0 | 1888 | 1.4501 |
| 0.4783 | 945.0 | 1890 | 1.4504 |
| 0.4783 | 946.0 | 1892 | 1.4500 |
| 0.4783 | 947.0 | 1894 | 1.4487 |
| 0.4783 | 948.0 | 1896 | 1.4465 |
| 0.4783 | 949.0 | 1898 | 1.4447 |
| 0.4783 | 950.0 | 1900 | 1.4429 |
| 0.4783 | 951.0 | 1902 | 1.4403 |
| 0.4783 | 952.0 | 1904 | 1.4384 |
| 0.4783 | 953.0 | 1906 | 1.4372 |
| 0.4783 | 954.0 | 1908 | 1.4366 |
| 0.4783 | 955.0 | 1910 | 1.4356 |
| 0.4783 | 956.0 | 1912 | 1.4345 |
| 0.4783 | 957.0 | 1914 | 1.4335 |
| 0.4783 | 958.0 | 1916 | 1.4317 |
| 0.4783 | 959.0 | 1918 | 1.4301 |
| 0.4783 | 960.0 | 1920 | 1.4289 |
| 0.4783 | 961.0 | 1922 | 1.4276 |
| 0.4783 | 962.0 | 1924 | 1.4262 |
| 0.4783 | 963.0 | 1926 | 1.4249 |
| 0.4783 | 964.0 | 1928 | 1.4235 |
| 0.4783 | 965.0 | 1930 | 1.4228 |
| 0.4783 | 966.0 | 1932 | 1.4220 |
| 0.4783 | 967.0 | 1934 | 1.4213 |
| 0.4783 | 968.0 | 1936 | 1.4198 |
| 0.4783 | 969.0 | 1938 | 1.4192 |
| 0.4783 | 970.0 | 1940 | 1.4189 |
| 0.4783 | 971.0 | 1942 | 1.4184 |
| 0.4783 | 972.0 | 1944 | 1.4169 |
| 0.4783 | 973.0 | 1946 | 1.4151 |
| 0.4783 | 974.0 | 1948 | 1.4138 |
| 0.4783 | 975.0 | 1950 | 1.4132 |
| 0.4783 | 976.0 | 1952 | 1.4123 |
| 0.4783 | 977.0 | 1954 | 1.4112 |
| 0.4783 | 978.0 | 1956 | 1.4099 |
| 0.4783 | 979.0 | 1958 | 1.4084 |
| 0.4783 | 980.0 | 1960 | 1.4059 |
| 0.4783 | 981.0 | 1962 | 1.4032 |
| 0.4783 | 982.0 | 1964 | 1.4003 |
| 0.4783 | 983.0 | 1966 | 1.3976 |
| 0.4783 | 984.0 | 1968 | 1.3951 |
| 0.4783 | 985.0 | 1970 | 1.3934 |
| 0.4783 | 986.0 | 1972 | 1.3921 |
| 0.4783 | 987.0 | 1974 | 1.3911 |
| 0.4783 | 988.0 | 1976 | 1.3901 |
| 0.4783 | 989.0 | 1978 | 1.3902 |
| 0.4783 | 990.0 | 1980 | 1.3899 |
| 0.4783 | 991.0 | 1982 | 1.3897 |
| 0.4783 | 992.0 | 1984 | 1.3896 |
| 0.4783 | 993.0 | 1986 | 1.3894 |
| 0.4783 | 994.0 | 1988 | 1.3895 |
| 0.4783 | 995.0 | 1990 | 1.3897 |
| 0.4783 | 996.0 | 1992 | 1.3898 |
| 0.4783 | 997.0 | 1994 | 1.3899 |
| 0.4783 | 998.0 | 1996 | 1.3900 |
| 0.4783 | 999.0 | 1998 | 1.3902 |
| 0.4785 | 1000.0 | 2000 | 1.3902 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
digiplay/BeautifulFantasyRealMix_diffusers
|
digiplay
| 2024-03-07T03:03:50Z | 2,435 | 6 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-05-26T18:18:45Z |
---
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
Model info:
https://civitai.com/models/66309/beautifulfantasyrealmix
file name: beautifulfantasyreal_v10.safetensors
Original Author's DEMO image:

|
jsfs11/testSLERPmerge
|
jsfs11
| 2024-03-07T02:56:23Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"jsfs11/testmodelformergev1",
"BioMistral/BioMistral-7B",
"conversational",
"base_model:BioMistral/BioMistral-7B",
"base_model:merge:BioMistral/BioMistral-7B",
"base_model:jsfs11/testmodelformergev1",
"base_model:merge:jsfs11/testmodelformergev1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-20T06:28:06Z |
---
tags:
- merge
- mergekit
- lazymergekit
- jsfs11/testmodelformergev1
- BioMistral/BioMistral-7B
base_model:
- jsfs11/testmodelformergev1
- BioMistral/BioMistral-7B
---
# testSLERPmerge
testSLERPmerge is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [jsfs11/testmodelformergev1](https://huggingface.co/jsfs11/testmodelformergev1)
* [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: jsfs11/testmodelformergev1
layer_range: [0, 32]
- model: BioMistral/BioMistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: BioMistral/BioMistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "jsfs11/testSLERPmerge"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
madroid/qwen1.5-0.5B-4bit-flow
|
madroid
| 2024-03-07T02:56:12Z | 4 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"en",
"license:other",
"region:us"
] |
text-generation
| 2024-03-07T02:55:25Z |
---
language:
- en
license: other
tags:
- chat
- mlx
- mlx
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
---
# madroid/qwen1.5-0.5B-4bit-flow
This model was converted to MLX format from [`mlx-community/Qwen1.5-0.5B-Chat-4bit`]().
Refer to the [original model card](https://huggingface.co/mlx-community/Qwen1.5-0.5B-Chat-4bit) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("madroid/qwen1.5-0.5B-4bit-flow")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
OwOOwO/eacc_bm_old_rt
|
OwOOwO
| 2024-03-07T02:49:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T01:02:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
humung/Ko-PlatYi-6B-vlending-cs-v0.2
|
humung
| 2024-03-07T02:49:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T01:43:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jerrish/bert-finetuned-ner
|
jerrish
| 2024-03-07T02:48:30Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-07T02:34:53Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0588
- Precision: 0.9358
- Recall: 0.9520
- F1: 0.9439
- Accuracy: 0.9867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0759 | 1.0 | 1756 | 0.0684 | 0.9103 | 0.9339 | 0.9219 | 0.9815 |
| 0.0355 | 2.0 | 3512 | 0.0647 | 0.9373 | 0.9490 | 0.9431 | 0.9859 |
| 0.0239 | 3.0 | 5268 | 0.0588 | 0.9358 | 0.9520 | 0.9439 | 0.9867 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
vaicai/kaifa-l2-adapters-v0.13.1.base
|
vaicai
| 2024-03-07T02:37:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T02:37:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Plaban81/gemma-medical_qa-Finetune
|
Plaban81
| 2024-03-07T02:35:33Z | 13 | 1 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T01:39:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dyang415/nohto-v0-10e
|
dyang415
| 2024-03-07T02:31:31Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T02:24:50Z |
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: nohto-v0-10e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: AutoModelForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
chat_template: inst
datasets:
- path: ./data/nohto/training.jsonl
type: sharegpt
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ../nohto-v0-10e
adapter: lora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
eval_sample_packing: false
hub_model_id: dyang415/nohto-v0-10e
wandb_project: nohto
wandb_name: nohto-v0
wandb_log_model: end
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 10
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps: 0.2
save_steps: 0.1
eval_max_new_tokens: 128
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# nohto-v0-10e
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8229
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7166 | 0.18 | 1 | 3.7658 |
| 0.5158 | 1.64 | 10 | 0.5278 |
| 0.2492 | 3.09 | 20 | 0.5739 |
| 0.0338 | 4.73 | 30 | 0.7476 |
| 0.0083 | 6.36 | 40 | 0.8089 |
| 0.0078 | 8.0 | 50 | 0.8229 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.1
- Tokenizers 0.15.0
|
namnh2002/model_timesformer_subset_02
|
namnh2002
| 2024-03-07T02:31:03Z | 24 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"timesformer",
"video-classification",
"generated_from_trainer",
"base_model:namnh2002/model_timesformer_subset_02",
"base_model:finetune:namnh2002/model_timesformer_subset_02",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-03-04T11:02:35Z |
---
license: cc-by-nc-4.0
base_model: namnh2002/model_timesformer_subset_02
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_timesformer_subset_02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_timesformer_subset_02
This model is a fine-tuned version of [namnh2002/model_timesformer_subset_02](https://huggingface.co/namnh2002/model_timesformer_subset_02) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4130
- Accuracy: 0.8852
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 6250
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9368 | 0.1 | 625 | 1.8302 | 0.5026 |
| 0.9936 | 1.1 | 1250 | 1.3368 | 0.6081 |
| 0.9407 | 2.1 | 1875 | 1.1348 | 0.6794 |
| 0.8338 | 3.1 | 2500 | 0.9604 | 0.7270 |
| 0.629 | 4.1 | 3125 | 0.7775 | 0.7684 |
| 0.4094 | 5.1 | 3750 | 0.6939 | 0.8056 |
| 0.398 | 6.1 | 4375 | 0.5883 | 0.8366 |
| 0.3242 | 7.1 | 5000 | 0.4594 | 0.8707 |
| 0.2768 | 8.1 | 5625 | 0.5158 | 0.8604 |
| 0.2571 | 9.1 | 6250 | 0.4130 | 0.8852 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
toiladolehuy/blue
|
toiladolehuy
| 2024-03-07T02:28:22Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-06T04:19:33Z |
---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
model-index:
- name: blue
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# blue
This model is a fine-tuned version of [nguyenvulebinh/wav2vec2-base-vietnamese-250h](https://huggingface.co/nguyenvulebinh/wav2vec2-base-vietnamese-250h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5902
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 13.7381 | 6.33 | 500 | 3.5902 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.0+cu121
- Datasets 2.7.1
- Tokenizers 0.15.2
|
dyang415/nohto-v0-1e
|
dyang415
| 2024-03-07T02:25:48Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T02:09:42Z |
---
license: apache-2.0
library_name: peft
tags:
- axolotl
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: nohto-v0-1e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
model_type: AutoModelForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true
load_in_8bit: false
load_in_4bit: false
strict: false
chat_template: inst
datasets:
- path: ./data/nohto/training.jsonl
type: sharegpt
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ../nohto-v0-1e
adapter: lora
lora_model_dir:
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
lora_r: 16
lora_alpha: 32
lora_dropout: 0.1
lora_target_linear: true
lora_fan_in_fan_out:
eval_sample_packing: false
hub_model_id: dyang415/nohto-v0-1e
wandb_project: nohto
wandb_name: nohto-v0
wandb_log_model: end
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
eval_steps: 0.2
save_steps: 0.1
eval_max_new_tokens: 128
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# nohto-v0-1e
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7166 | 0.18 | 1 | 3.7658 |
| 2.1253 | 0.36 | 2 | 3.2472 |
| 2.1969 | 0.55 | 3 | 1.8100 |
| 1.0305 | 0.73 | 4 | 1.1527 |
| 0.7511 | 0.91 | 5 | 0.8883 |
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.1
- Tokenizers 0.15.0
|
bajajss/CourseEvalTopicModeling
|
bajajss
| 2024-03-07T02:25:21Z | 3 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-02-25T09:38:48Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# CourseEvalTopicModeling
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("bajajss/CourseEvalTopicModeling")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 28
* Number of training documents: 204
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | 32 - structure - course - 33 - 73 | 2 | Course Structure and Delivery |
| 0 | 70 - 57 - 71 - 10 - 25 | 37 | Teaching and Learning Evaluations |
| 1 | connects - enhance - hands - real - smaller | 28 | Project Implementation and Guidance |
| 2 | 45 - 74 - 65 - thoroughly - sure | 13 | Homework Evaluation |
| 3 | materials - 102 - studying - slides - sections | 11 | Course Materials and Study Strategies |
| 4 | challenging - complex - induc - stressful - stress | 9 | Challenging Homework Experience |
| 5 | wong - ma - professor - 59 - uses | 9 | Talented Instructor |
| 6 | intelle - challen - 100 - contributed - 54 | 8 | Intellectual Challenges and Contributions |
| 7 | 47 - video - doing - demonstrations - overviews | 7 | Learning Activities and Resources |
| 8 | difference - 79 - betwe - concept - huge | 6 | Comparing Systems |
| 9 | open - 62 - contribut - digestible - ben | 6 | Open Lab and Social Interaction |
| 10 | 16 - aspects - intellectu - inspired - aspect | 5 | Student Feedback and Opinion |
| 11 | 78 - 77 - becau - similarly - letting | 5 | Lab Experience and Evaluation |
| 12 | slide - stem - pre - presented - great | 5 | Interactive Learning Media |
| 13 | programming - burni - creative - despise - cla | 5 | Criticisms of C Programming |
| 14 | cove - diff - inspiring - stimulating - 49 | 5 | Inspiring Learning Experience |
| 15 | workload - decreased - heavy - sli - 56 | 4 | Workload and Workload Management |
| 16 | 63 - best - super - taking - honestly | 4 | Positive Student Feedback |
| 17 | worst - perfect - entire - demanding - overall | 4 | Course Opinions |
| 18 | 60 - 19 - personally - pushed - goo | 4 | Student Perceptions of Self-Learning |
| 19 | syllabus - remove - issue - suffered - challenges | 4 | Class Evaluation |
| 20 | hav - bridge - person - helpful - 38 | 4 | Lecture Evaluation and Feedback |
| 21 | intellectua - creativel - 24 - 92 - 85 | 4 | Intellectual and Creative Projects |
| 22 | usually - used - present - professors - marital | 3 | Lecture and Discussion Techniques |
| 23 | designed - zoom - questio - sample - increase | 3 | Virtual Learning Environments |
| 24 | tons - teach - practice - opportunities - plenty | 3 | Learning Through Practice |
| 25 | painful - 91 - 74 - challenging - interesting | 3 | Project Experience |
| 26 | th - cs - exam - suggest - code | 3 | CS Exam Preparation and Topics |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.4.0
* Transformers: 4.38.1
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
|
vaicai/kaifa-l2-v0.70.1
|
vaicai
| 2024-03-07T02:17:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T02:17:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vaicai/kaifa-l2-adapters-v0.70.1
|
vaicai
| 2024-03-07T02:16:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T00:42:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/BERT-legal-de-cased_German_legal_SQuAD_100
|
farid1088
| 2024-03-07T02:15:50Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T13:36:22Z |
---
tags:
- generated_from_trainer
model-index:
- name: BERT-legal-de-cased_German_legal_SQuAD_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-legal-de-cased_German_legal_SQuAD_100
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.1699 |
| No log | 2.0 | 4 | 6.0988 |
| No log | 3.0 | 6 | 6.1337 |
| No log | 4.0 | 8 | 6.0920 |
| No log | 5.0 | 10 | 5.7177 |
| No log | 6.0 | 12 | 5.4370 |
| No log | 7.0 | 14 | 5.0989 |
| No log | 8.0 | 16 | 4.7941 |
| No log | 9.0 | 18 | 4.5553 |
| No log | 10.0 | 20 | 4.3606 |
| No log | 11.0 | 22 | 4.1233 |
| No log | 12.0 | 24 | 4.0111 |
| No log | 13.0 | 26 | 3.7135 |
| No log | 14.0 | 28 | 3.5629 |
| No log | 15.0 | 30 | 3.5646 |
| No log | 16.0 | 32 | 3.3053 |
| No log | 17.0 | 34 | 3.1965 |
| No log | 18.0 | 36 | 3.2064 |
| No log | 19.0 | 38 | 2.9900 |
| No log | 20.0 | 40 | 2.9667 |
| No log | 21.0 | 42 | 2.9644 |
| No log | 22.0 | 44 | 2.7132 |
| No log | 23.0 | 46 | 2.7165 |
| No log | 24.0 | 48 | 2.6027 |
| No log | 25.0 | 50 | 2.4750 |
| No log | 26.0 | 52 | 2.3510 |
| No log | 27.0 | 54 | 2.3203 |
| No log | 28.0 | 56 | 2.2285 |
| No log | 29.0 | 58 | 2.0256 |
| No log | 30.0 | 60 | 2.0322 |
| No log | 31.0 | 62 | 1.8101 |
| No log | 32.0 | 64 | 1.8524 |
| No log | 33.0 | 66 | 1.7909 |
| No log | 34.0 | 68 | 1.6231 |
| No log | 35.0 | 70 | 1.6745 |
| No log | 36.0 | 72 | 1.5054 |
| No log | 37.0 | 74 | 1.6253 |
| No log | 38.0 | 76 | 1.4270 |
| No log | 39.0 | 78 | 1.4424 |
| No log | 40.0 | 80 | 1.5606 |
| No log | 41.0 | 82 | 1.3163 |
| No log | 42.0 | 84 | 1.3230 |
| No log | 43.0 | 86 | 1.3162 |
| No log | 44.0 | 88 | 1.2603 |
| No log | 45.0 | 90 | 1.3048 |
| No log | 46.0 | 92 | 1.2153 |
| No log | 47.0 | 94 | 1.2424 |
| No log | 48.0 | 96 | 1.2823 |
| No log | 49.0 | 98 | 1.1593 |
| No log | 50.0 | 100 | 1.1825 |
| No log | 51.0 | 102 | 1.2329 |
| No log | 52.0 | 104 | 1.1442 |
| No log | 53.0 | 106 | 1.2142 |
| No log | 54.0 | 108 | 1.3541 |
| No log | 55.0 | 110 | 1.1968 |
| No log | 56.0 | 112 | 1.1003 |
| No log | 57.0 | 114 | 1.2036 |
| No log | 58.0 | 116 | 1.3075 |
| No log | 59.0 | 118 | 1.1995 |
| No log | 60.0 | 120 | 1.1142 |
| No log | 61.0 | 122 | 1.2022 |
| No log | 62.0 | 124 | 1.3133 |
| No log | 63.0 | 126 | 1.2290 |
| No log | 64.0 | 128 | 1.1718 |
| No log | 65.0 | 130 | 1.1969 |
| No log | 66.0 | 132 | 1.2479 |
| No log | 67.0 | 134 | 1.2349 |
| No log | 68.0 | 136 | 1.1683 |
| No log | 69.0 | 138 | 1.1525 |
| No log | 70.0 | 140 | 1.2341 |
| No log | 71.0 | 142 | 1.2245 |
| No log | 72.0 | 144 | 1.1482 |
| No log | 73.0 | 146 | 1.1392 |
| No log | 74.0 | 148 | 1.1875 |
| No log | 75.0 | 150 | 1.1961 |
| No log | 76.0 | 152 | 1.1616 |
| No log | 77.0 | 154 | 1.1690 |
| No log | 78.0 | 156 | 1.2106 |
| No log | 79.0 | 158 | 1.2193 |
| No log | 80.0 | 160 | 1.1841 |
| No log | 81.0 | 162 | 1.1711 |
| No log | 82.0 | 164 | 1.1655 |
| No log | 83.0 | 166 | 1.1740 |
| No log | 84.0 | 168 | 1.1784 |
| No log | 85.0 | 170 | 1.1666 |
| No log | 86.0 | 172 | 1.1771 |
| No log | 87.0 | 174 | 1.1708 |
| No log | 88.0 | 176 | 1.1635 |
| No log | 89.0 | 178 | 1.1670 |
| No log | 90.0 | 180 | 1.1639 |
| No log | 91.0 | 182 | 1.1550 |
| No log | 92.0 | 184 | 1.1559 |
| No log | 93.0 | 186 | 1.1569 |
| No log | 94.0 | 188 | 1.1577 |
| No log | 95.0 | 190 | 1.1628 |
| No log | 96.0 | 192 | 1.1635 |
| No log | 97.0 | 194 | 1.1627 |
| No log | 98.0 | 196 | 1.1614 |
| No log | 99.0 | 198 | 1.1599 |
| No log | 100.0 | 200 | 1.1595 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
IEITYuan/Yuan2-2B-Februa
|
IEITYuan
| 2024-03-07T02:14:44Z | 0 | 0 | null |
[
"arxiv:2311.15786",
"region:us"
] | null | 2024-03-01T07:41:36Z |
# 介绍(Introduction)
源2.0 是浪潮信息发布的新一代基础语言大模型。我们开源了全部的3个模型源2.0-102B,源2.0-51B和源2.0-2B。并且我们提供了预训练,微调,推理服务的相关脚本,以供研发人员做进一步的开发。源2.0是在源1.0的基础上,利用更多样的高质量预训练数据和指令微调数据集,令模型在语义、数学、推理、代码、知识等不同方面具备更强的理解能力。
更为详细的使用信息,可以参考:
[源2.0 论文](https://arxiv.org/ftp/arxiv/papers/2311/2311.15786.pdf)
[github项目地址](https://github.com/IEIT-Yuan/Yuan-2.0)
# 评测结果
我们提供了[HumanEval](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_humaneval.md),[AGIEval-GK-Math](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_agieval_math.md),[GSM8K](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_gsm8k.md)和[TruthfulQA](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/docs/eval_TruthfulQA.md)的评估脚本。在4个典型任务上,我们用源2.0不同版本模型上进行了性能测试。
| Model | GSM8K | AGIEval-GK-Math-QA | AGIEval-GK-Math-Cloze | HumanEval | TurthfulQA |
| ----------------- | :----: | :------------: | :---------------: | :-------: | ---------- |
| GPT-4 | 92% | 47.0% | 16.1% | 86.6% | 59% |
| ChatGPT | 68.6%\* | 36.5% | 7.3% | 66.5%\* | 34%\* |
| Llama2 | 56.8% | - | - | 29.9% | - |
| 源2.0-102B | 76.6% | 38.7% | 13.5% | 67.1% | 58% |
| 源2.0-102B-SC | 86.2% | 45.5% | 15.2% | 77.4% | - |
\* 使用与源2.0完全相同的输入数据对ChatGPT进行测试,时间2023年11月
# 快速启动
## 数据集介绍及预处理
源2.0通过使用中英文书籍、百科、论文等高质量中英文资料,降低了互联网语料内容占比,结合高效的数据清洗流程,为大模型训练提供了高质量的专业数据集和逻辑推理数据集。
## 预训练与微调
本项目已在Yuan-2.0开源了训练、测试和推理代码,使用者可按下面方式下载安装使用:
```bash
git clone https://github.com/IEIT-Yuan/Yuan-2.0
bash examples/pretrain_yuan2.0**.sh
```
考虑到推理服务的效率,源2.0-51B和源2.0-102B模型在启动推理服务之前,需要将模型转换成只有张量并行的模型文件。
更多使用说明,请参考我们的[github仓库](https://github.com/IEIT-Yuan/Yuan-2.0)。
# 协议
对该模型的原代码仓库使用遵循开源许可协议 Apache 2.0。
源2.0模型支持商用,不需要申请授权,请您了解并遵循[《源2.0模型许可协议》](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/LICENSE-Yuan),勿将开源模型和代码及基于开源项目产生的衍生物用于任何可能给国家和社会带来危害的用途以及用于任何未经过安全评估和备案的服务。
尽管模型在训练时我们已采取措施尽力确保数据的合规性和准确性,但模型参数量巨大且受概率随机性因素影响,我们无法保证输出内容的准确性,且模型易被输入指令所误导,本项目不承担开源模型和代码导致的数据安全、舆情风险或发生任何模型被误导、滥用、传播、不当利用而产生的风险和责任。**您将对通过使用、复制、分发和修改模型等方式利用该开源项目所产生的风险与后果,独自承担全部责任。**
# 引用
欢迎阅读我们的技术报告 [YUAN 2.0: A Large Language Model with Localized Filtering-based Attention](http://arxiv.org/pdf/2311.15786.pdf)!
|
ChaoticNeutrals/Eris_Remix_DPO_7B
|
ChaoticNeutrals
| 2024-03-07T02:14:35Z | 281 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T01:22:05Z |
---
base_model: []
library_name: transformers
license: other
language:
- en
---

# Jeitral: "Eris, the Greek goddess of chaos and discord."
Notes: Model should be excellent for both RP/Chat related tasks. Seems to be working in both Alpaca/Chatml.
Collaborative effort from both @Jeiku and @Nitral involving what we currently felt were our best individual projects.
We hope you enjoy! - The Chaotic Neutrals.
# Remix with DPO: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
Trained for 200 steps/ 1 epoch
Base model used: https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B
|
farid1088/BERT-legal-de-cased_German_legal_SQuAD_17
|
farid1088
| 2024-03-07T02:09:43Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T13:01:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: BERT-legal-de-cased_German_legal_SQuAD_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-legal-de-cased_German_legal_SQuAD_17
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 6.1900 |
| No log | 2.0 | 4 | 6.1896 |
| No log | 3.0 | 6 | 6.2832 |
| No log | 4.0 | 8 | 6.1739 |
| No log | 5.0 | 10 | 5.8089 |
| No log | 6.0 | 12 | 5.5485 |
| No log | 7.0 | 14 | 5.3540 |
| No log | 8.0 | 16 | 5.1463 |
| No log | 9.0 | 18 | 4.9179 |
| No log | 10.0 | 20 | 4.7521 |
| No log | 11.0 | 22 | 4.6237 |
| No log | 12.0 | 24 | 4.5150 |
| No log | 13.0 | 26 | 4.4347 |
| No log | 14.0 | 28 | 4.3646 |
| No log | 15.0 | 30 | 4.3187 |
| No log | 16.0 | 32 | 4.2865 |
| No log | 17.0 | 34 | 4.2733 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
oerdal/ppo-LunarLander-v2
|
oerdal
| 2024-03-07T02:08:47Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T02:08:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 177.35 +/- 109.04
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
q-future/co-instruct
|
q-future
| 2024-03-07T02:06:02Z | 393 | 17 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mplug_owl2",
"feature-extraction",
"image-text-to-text",
"custom_code",
"dataset:q-future/Q-Instruct-DB",
"dataset:q-future/Co-Instruct-DB",
"arxiv:2402.16641",
"region:us"
] |
image-text-to-text
| 2024-01-10T15:11:10Z |
---
datasets:
- q-future/Q-Instruct-DB
- q-future/Co-Instruct-DB
pipeline_tag: image-text-to-text
---
## News
See its paper: https://huggingface.co/papers/2402.16641
## Load Model
```python
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("q-future/co-instruct",
trust_remote_code=True,
torch_dtype=torch.float16,
attn_implementation="eager",
device_map={"":"cuda:0"})
```
## Chat
```python
import requests
from PIL import Image
### Single Image
prompt = "USER: The image: <|image|> Which happens in this image: motion-blur, over-exposure, or under-exposure? ASSISTANT:"
url = "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg"
image = Image.open(requests.get(url,stream=True).raw)
model.chat(prompt, [image], max_new_tokens=200)
## Motion blur
### Double Image Comparison
prompt_cmp = "USER: The first image: <|image|>\nThe second image: <|image|>Which image has better quality, and why? ASSISTANT:"
url = "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/boy_colorful.jpg"
image_2 = Image.open(requests.get(url,stream=True).raw)
model.chat(prompt_cmp, [image, image_2], max_new_tokens=200)
## The second image has better quality. The description indicates that the image has accurate exposure, precise focus, clear details, rich colors, and sufficient lighting. Additionally, the texture details are clear, and the composition is centered. In comparison, the first image has good clarity and rich texture details, but the lighting is slightly weak, which can affect the overall quality of the image. Therefore, the second image is of higher quality due to its accurate exposure, precise focus, clear details, rich colors, sufficient lighting, and centered composition.
```
|
ErikQQY/new-model
|
ErikQQY
| 2024-03-07T02:01:18Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-07T01:56:58Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0
|
ENERGY-DRINK-LOVE/SOLAR_merge_DPOv3
|
ENERGY-DRINK-LOVE
| 2024-03-07T01:59:08Z | 2,293 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:ENERGY-DRINK-LOVE/SOLAR_merge",
"base_model:finetune:ENERGY-DRINK-LOVE/SOLAR_merge",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T18:11:00Z |
---
base_model: ENERGY-DRINK-LOVE/SOLAR_merge
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: nhn_dpo_v3_SOLAR_merge_DPO
results: []
license: apache-2.0
---
### Model
* trained on custom DPO dataset
* dedup
* ~20000??
### Base Moel
* ENERGY-DRINK-LOVE/SOLAR_merge
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
|
larixlarix/detr-resnet-101_finetuned_cppe5
|
larixlarix
| 2024-03-07T01:57:41Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-101",
"base_model:finetune:facebook/detr-resnet-101",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-03-06T17:12:02Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-101
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-101_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-101_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-101](https://huggingface.co/facebook/detr-resnet-101) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
jinhybr/Mistral-7B-v0.1-text-to-sql
|
jinhybr
| 2024-03-07T01:50:34Z | 5 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T23:49:03Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
datasets:
- generator
model-index:
- name: Mistral-7B-v0.1-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-text-to-sql
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2
|
arlineka/Brunhilde-13b-v1
|
arlineka
| 2024-03-07T01:45:47Z | 59 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-14T14:27:02Z |
---
license: cc-by-nc-4.0
tags:
- merge
model-index:
- name: Brunhilde-13b-v1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 61.09
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 55.32
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 51.98
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 20.09
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=arlineka/Brunhilde-13b-v1
name: Open LLM Leaderboard
---
# Brunhilde-13b-v1
Brunhilde-13b-v1 is a merge of the following models
* [Gryphe/MythoMax-L2-13b](https://huggingface.co/Gryphe/MythoMax-L2-13b)
* [Undi95/ReMM-SLERP-L2-13B](https://huggingface.co/Undi95/ReMM-SLERP-L2-13B)
## Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "arlineka/Brunhilde-13b-v1"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_arlineka__Brunhilde-13b-v1)
| Metric |Value|
|---------------------------------|----:|
|Avg. |57.88|
|AI2 Reasoning Challenge (25-Shot)|61.09|
|HellaSwag (10-Shot) |83.58|
|MMLU (5-Shot) |55.32|
|TruthfulQA (0-shot) |51.98|
|Winogrande (5-shot) |75.22|
|GSM8k (5-shot) |20.09|
|
sbottazziunsam/4-classifier-finetuned-padchest
|
sbottazziunsam
| 2024-03-07T01:39:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-07T01:18:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: 4-classifier-finetuned-padchest
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7123519458544839
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4-classifier-finetuned-padchest
This model is a fine-tuned version of [nickmuchi/vit-finetuned-chest-xray-pneumonia](https://huggingface.co/nickmuchi/vit-finetuned-chest-xray-pneumonia) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9186
- Accuracy: 0.7124
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0441 | 1.0 | 14 | 1.9084 | 0.3164 |
| 1.8716 | 2.0 | 28 | 1.6532 | 0.4484 |
| 1.4727 | 3.0 | 42 | 1.4218 | 0.5228 |
| 1.3452 | 4.0 | 56 | 1.3037 | 0.5736 |
| 1.2518 | 5.0 | 70 | 1.2799 | 0.5584 |
| 1.1646 | 6.0 | 84 | 1.1892 | 0.6244 |
| 1.1358 | 7.0 | 98 | 1.1543 | 0.6074 |
| 1.0664 | 8.0 | 112 | 1.1060 | 0.6277 |
| 1.041 | 9.0 | 126 | 1.0434 | 0.6667 |
| 1.002 | 10.0 | 140 | 1.0337 | 0.6582 |
| 0.9867 | 11.0 | 154 | 1.0373 | 0.6582 |
| 0.9485 | 12.0 | 168 | 0.9866 | 0.6887 |
| 0.9121 | 13.0 | 182 | 0.9827 | 0.6785 |
| 0.918 | 14.0 | 196 | 0.9588 | 0.7039 |
| 0.8882 | 15.0 | 210 | 0.9576 | 0.7005 |
| 0.873 | 16.0 | 224 | 0.9450 | 0.7022 |
| 0.8469 | 17.0 | 238 | 0.9266 | 0.7090 |
| 0.814 | 18.0 | 252 | 0.9463 | 0.6971 |
| 0.8206 | 19.0 | 266 | 0.9201 | 0.7090 |
| 0.8078 | 20.0 | 280 | 0.9186 | 0.7124 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
|
kornwtp/simcse-model-distil-m-bert
|
kornwtp
| 2024-03-07T01:26:36Z | 27 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-22T09:10:56Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/simcse-model-distil-m-bert}
This is a [sentence-transformers](https://www.SBERT.net) by using m-Distil-BERT as the baseline model model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/simcse-model-m-bert-thai-cased
|
kornwtp
| 2024-03-07T01:26:21Z | 12 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-22T10:10:42Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/simcse-model-m-bert-thai-cased}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) by using mBERT as the baseline model and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/simcse-model-phayathaibert
|
kornwtp
| 2024-03-07T01:26:02Z | 5,233 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-22T16:09:19Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/simcse-model-phayathaibert}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/SCT-model-wangchanberta
|
kornwtp
| 2024-03-07T01:25:19Z | 174 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2311.03228",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-27T04:24:14Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/SCT-model-wangchanberta}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SCT [here](https://arxiv.org/abs/2311.03228) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/SCT-model-phayathaibert
|
kornwtp
| 2024-03-07T01:24:59Z | 124 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2311.03228",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-27T04:26:37Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/SCT-model-phayathaibert}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SCT [here](https://arxiv.org/abs/2311.03228) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
gremlin97/eli5_distilgpt
|
gremlin97
| 2024-03-07T01:24:27Z | 15 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:eli5_category",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T03:46:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- eli5_category
base_model: distilbert/distilgpt2
model-index:
- name: eli5_distilgpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eli5_distilgpt
This model is a fine-tuned version of [distilbert/distilgpt2](https://huggingface.co/distilbert/distilgpt2) on the eli5_category dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9573 | 1.0 | 1323 | 3.8356 |
| 3.8591 | 2.0 | 2646 | 3.8269 |
| 3.8181 | 3.0 | 3969 | 3.8251 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
kornwtp/SCT-model-XLMR
|
kornwtp
| 2024-03-07T01:24:23Z | 44 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2311.03228",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-01T03:24:41Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/SCT-model-XLMR}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SCT [here](https://arxiv.org/abs/2311.03228) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/SCT-KD-model-wangchanberta
|
kornwtp
| 2024-03-07T01:24:04Z | 46 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2311.03228",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-01T03:33:48Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/SCT-KD-model-wangchanberta}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SCT Distillation [here](https://arxiv.org/abs/2311.03228) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
|
kornwtp/simcse-model-XLMR
|
kornwtp
| 2024-03-07T01:22:53Z | 27 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"arxiv:2104.08821",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-12-22T16:10:02Z |
---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {kornwtp/simcse-model-XLMR}
This is a [sentence-transformers](https://www.SBERT.net) by using XLM-R as the baseline model model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
We use SimCSE [here](https://arxiv.org/pdf/2104.08821.pdf) and training the model with Thai Wikipedia [here](https://github.com/PyThaiNLP/ThaiWiki-clean/releases/tag/20210620?fbclid=IwAR1YcmZkb-xd1ibTWCJOcu98_FQ5x3ioZaGW1ME-VHy9fAQLhEr5tXTJygA)
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["กลุ่มผู้ชายเล่นฟุตบอลบนชายหาด", "กลุ่มเด็กชายกำลังเล่นฟุตบอลบนชายหาด"]
model = SentenceTransformer('kornwtp/ConGen-paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
|
CatBarks/t5_es100SEC4_2_tokenizer
|
CatBarks
| 2024-03-07T01:22:31Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T01:22:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
CatBarks/t5_es100SEC4_2
|
CatBarks
| 2024-03-07T01:22:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T01:20:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vinluvie/clip-general
|
vinluvie
| 2024-03-07T01:21:22Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-02-13T18:30:52Z |
---
base_model: openai/clip-vit-large-patch14
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: clip-vit-large-patch14-finetuned-sofas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-large-patch14-finetuned-sofas
This model is a fine-tuned version of [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1360
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 1.13.1
- Datasets 2.16.1
- Tokenizers 0.15.2
|
akameswa/mistral-7b-instruct-javascript-4bit-old
|
akameswa
| 2024-03-07T01:19:29Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T01:17:18Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** akameswa
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Gille/StrangeMerges_32-7B-slerp
|
Gille
| 2024-03-07T01:17:01Z | 112 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_31-7B-slerp",
"yam-peleg/Experiment28-7B",
"base_model:Gille/StrangeMerges_31-7B-slerp",
"base_model:merge:Gille/StrangeMerges_31-7B-slerp",
"base_model:yam-peleg/Experiment28-7B",
"base_model:merge:yam-peleg/Experiment28-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T18:59:59Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_31-7B-slerp
- yam-peleg/Experiment28-7B
base_model:
- Gille/StrangeMerges_31-7B-slerp
- yam-peleg/Experiment28-7B
---
# StrangeMerges_32-7B-slerp
StrangeMerges_32-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_31-7B-slerp](https://huggingface.co/Gille/StrangeMerges_31-7B-slerp)
* [yam-peleg/Experiment28-7B](https://huggingface.co/yam-peleg/Experiment28-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_31-7B-slerp
layer_range: [0, 32]
- model: yam-peleg/Experiment28-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment28-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.5, 0.5, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_32-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
scholl99/BLOOMZ_1B1_PROMPT_TUNING_CAUSAL_LM
|
scholl99
| 2024-03-07T01:15:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T01:15:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lewdiculous/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO-GGUF-IQ-Imatrix
|
Lewdiculous
| 2024-03-07T01:12:08Z | 36 | 5 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T22:10:04Z |
---
tags:
- gguf
---
This repository hosts GGUF-IQ-Imatrix quantizations for [eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO).
**This is experimental.**
```python
quantization_options = [
"Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
"Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
]
```
|
edwardyeung04/bert_base_uncased_ensemble_3
|
edwardyeung04
| 2024-03-07T01:02:59Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-03-07T01:02:38Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_uncased_ensemble_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_ensemble_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7949
- Accuracy: 0.552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 310 | 1.0565 | 0.546 |
| 0.9701 | 2.0 | 620 | 1.1190 | 0.58 |
| 0.9701 | 3.0 | 930 | 1.3406 | 0.556 |
| 0.4033 | 4.0 | 1240 | 1.6471 | 0.548 |
| 0.1509 | 5.0 | 1550 | 1.7949 | 0.552 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Kquant03/TechxGenus-starcoder2-15b-instruct-GGUF
|
Kquant03
| 2024-03-07T00:57:59Z | 261 | 3 |
transformers
|
[
"transformers",
"gguf",
"code",
"starcoder2",
"text-generation",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:23:03Z |
---
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
---
<p align="center">
<img width="300px" alt="starcoder2-instruct" src="https://huggingface.co/TechxGenus/starcoder2-15b-instruct/resolve/main/starcoder2-instruct.jpg">
</p>
### starcoder2-instruct (not my model, I just quantized it)
We've fine-tuned starcoder2-15b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs. We used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the training process. It achieves **77.4 pass@1** on HumanEval-Python. This model operates using the Alpaca instruction format (excluding the system prompt).
### Usage
Here give some examples of how to use our model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = <Your code instruction here>
prompt = PROMPT.format(instruction=instruction)
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-15b-instruct")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/starcoder2-15b-instruct",
torch_dtype=torch.bfloat16,
device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
With text-generation pipeline:
```python
from transformers import pipeline
import torch
PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = <Your code instruction here>
prompt = PROMPT.format(instruction=instruction)
generator = pipeline(
model="TechxGenus/starcoder2-15b-instruct",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
### Note
Model may sometimes make errors, produce misleading contents, or struggle to manage tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployments.
|
edwardyeung04/bert_base_uncased_ensemble_2
|
edwardyeung04
| 2024-03-07T00:57:31Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-03-07T00:57:13Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_uncased_ensemble_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_ensemble_2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7949
- Accuracy: 0.552
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 310 | 1.0565 | 0.546 |
| 0.9701 | 2.0 | 620 | 1.1190 | 0.58 |
| 0.9701 | 3.0 | 930 | 1.3406 | 0.556 |
| 0.4033 | 4.0 | 1240 | 1.6471 | 0.548 |
| 0.1509 | 5.0 | 1550 | 1.7949 | 0.552 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
MayensGuds/SambaLingo-Arabic-Chat-GGUF
|
MayensGuds
| 2024-03-07T00:54:43Z | 53 | 15 | null |
[
"gguf",
"arabic",
"عربي",
"لغة عربية",
"محادثة عربية",
"العرب",
"عربية",
"مصرية",
"سورية",
"اللهجة",
"ar",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T23:24:42Z |
---
language:
- ar
tags:
- gguf
- arabic
- عربي
- لغة عربية
- محادثة عربية
- العرب
- عربية
- مصرية
- سورية
- اللهجة
---
This is a qunatization of the sambaLingo LLama based arabic chat model
الموديل هذا تم عمل تكميم له, بمعنى انه يمكن تشغيله على اجهزة اللابتوب او الكمبيوترات العادية
لو تمتلك حوالي 8 جيجا بايت من الرام سوف تستطيع استخدام هذا الموديل
تجربة النموذج:

حدود النموذج:
- الموديل محدود بالكلام العربي ولا يستطيع فهم اللهجات الغير الفصحى
- النموذج مبني على Llama2
لاما2 تم تدريبها على كلام باللغة الانجليزية وبعض اللغات الاخرى ولكن معظم مجموعة البيانات كانت لغات غير عربية بالتالي يوجد تحيز كامل في النموذج
لو تمتلك اهتمام ببناء نموذج محادثات عربي او تمتلك داتا سيت باللهجات العربية تواصل معي لكي نتساعد على بناء اول وايفو عربي :3
شكرا!
|
edwardyeung04/bert_base_uncased_ensemble_1
|
edwardyeung04
| 2024-03-07T00:52:05Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-03-07T00:51:44Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_uncased_ensemble_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_ensemble_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8970
- Accuracy: 0.55
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 310 | 1.1045 | 0.532 |
| 0.9909 | 2.0 | 620 | 1.1314 | 0.574 |
| 0.9909 | 3.0 | 930 | 1.4262 | 0.554 |
| 0.4645 | 4.0 | 1240 | 1.6905 | 0.552 |
| 0.1803 | 5.0 | 1550 | 1.8970 | 0.55 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Dracones/Midnight-Miqu-103B-v1.0_exl2_4.0bpw
|
Dracones
| 2024-03-07T00:49:58Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T00:41:45Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# Midnight-Miqu-103B-v1.0 - EXL2 4.0bpw
This is a 4.0bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-103B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-103B-v1.0)
Details about the model and the merge info can be found at the above mode page.
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Define variables
MODEL_DIR="models/sophosympatheia_Midnight-Miqu-103B-v1.0"
OUTPUT_DIR="exl2_midnight103b"
MEASUREMENT_FILE="measurements/midnight103b.json"
BIT_PRECISION=4.0
CONVERTED_FOLDER="models/Midnight-Miqu-103B_exl2_4.0bpw"
# Create directories
mkdir $OUTPUT_DIR
mkdir $CONVERTED_FOLDER
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
```
|
Dracones/Midnight-Miqu-103B-v1.0_exl2_3.75bpw
|
Dracones
| 2024-03-07T00:41:02Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T00:33:20Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# Midnight-Miqu-103B-v1.0 - EXL2 3.75bpw
This is a 3.75bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-103B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-103B-v1.0)
Details about the model and the merge info can be found at the above mode page.
## Quant Details
This is the script used for quantization.
```bash
#!/bin/bash
# Activate the conda environment
source ~/miniconda3/etc/profile.d/conda.sh
conda activate exllamav2
# Define variables
MODEL_DIR="models/sophosympatheia_Midnight-Miqu-103B-v1.0"
OUTPUT_DIR="exl2_midnight103b"
MEASUREMENT_FILE="measurements/midnight103b.json"
BIT_PRECISION=3.75
CONVERTED_FOLDER="models/Midnight-Miqu-103B_exl2_3.75bpw"
# Create directories
mkdir $OUTPUT_DIR
mkdir $CONVERTED_FOLDER
# Run conversion commands
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -om $MEASUREMENT_FILE
python convert.py -i $MODEL_DIR -o $OUTPUT_DIR -nr -m $MEASUREMENT_FILE -b $BIT_PRECISION -cf $CONVERTED_FOLDER
```
|
Maggie1239264705/falcon7binstruct_medical_bot
|
Maggie1239264705
| 2024-03-07T00:39:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:vilsonrodrigues/falcon-7b-instruct-sharded",
"base_model:adapter:vilsonrodrigues/falcon-7b-instruct-sharded",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T23:36:26Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: vilsonrodrigues/falcon-7b-instruct-sharded
model-index:
- name: falcon7binstruct_medical_bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# falcon7binstruct_medical_bot
This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
cjpais/llava-v1.6-vicuna-13b-gguf
|
cjpais
| 2024-03-07T00:37:27Z | 2,374 | 9 | null |
[
"gguf",
"llava",
"image-text-to-text",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-02-17T16:53:55Z |
---
license: apache-2.0
tags:
- llava
pipeline_tag: image-text-to-text
---
# GGUF Quantized LLaVA 1.6 Vicuna 13B
Updated quants and projector from [PR #5267](https://github.com/ggerganov/llama.cpp/pull/5267)
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [llava-v1.6-vicuna-13b.Q3_K_XS.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q3_K_XS.gguf) | Q3_K_XS | 3 | 5.31 GB| very small, high quality loss |
| [llava-v1.6-vicuna-13b.Q3_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| very small, high quality loss |
| [llava-v1.6-vicuna-13b.Q4_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| medium, balanced quality - recommended |
| [llava-v1.6-vicuna-13b.Q5_K_S.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| large, low quality loss - recommended |
| [llava-v1.6-vicuna-13b.Q5_K_M.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| large, very low quality loss - recommended |
| [llava-v1.6-vicuna-13b.Q6_K.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q6_K.gguf) | Q6_K | 5 | 10.7 GB| very large, extremely low quality loss |
| [llava-v1.6-vicuna-13b.Q8_0.gguf](https://huggingface.co/cjpais/llava-v1.6-vicuna-13b-gguf/blob/main/llava-v1.6-vicuna-13b.Q8_0.gguf) | Q8_0 | 5 | 13.8 GB| very large, extremely low quality loss - not recommended |
<br>
<br>
# ORIGINAL LLaVA Model Card
## Model details
**Model type:**
LLaVA is an open-source chatbot trained by fine-tuning LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5)
**Model date:**
LLaVA-v1.6-Vicuna-13B was trained in December 2023.
**Paper or resources for more information:**
https://llava-vl.github.io/
## License
Llama 2 is licensed under the LLAMA 2 Community License,
Copyright (c) Meta Platforms, Inc. All Rights Reserved.
**Where to send questions or comments about the model:**
https://github.com/haotian-liu/LLaVA/issues
## Intended use
**Primary intended uses:**
The primary use of LLaVA is research on large multimodal models and chatbots.
**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
## Training dataset
- 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP.
- 158K GPT-generated multimodal instruction-following data.
- 500K academic-task-oriented VQA data mixture.
- 50K GPT-4V data mixture.
- 40K ShareGPT data.
## Evaluation dataset
A collection of 12 benchmarks, including 5 academic VQA benchmarks and 7 recent benchmarks specifically proposed for instruction-following LMMs.
|
martinsinnona/bbb
|
martinsinnona
| 2024-03-07T00:34:16Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pix2struct",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-03-06T21:00:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
allenai/truthfulqa-info-judge-llama2-7B
|
allenai
| 2024-03-07T00:28:23Z | 896 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:truthful_qa",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-07T20:01:14Z |
---
license: apache-2.0
datasets:
- truthful_qa
language:
- en
metrics:
- accuracy
---
This model is built based on LLaMa2 7B in replacement of the truthfulness/informativeness judge models that were originally introduced in the TruthfulQA paper.
That model is based on OpenAI's Curie engine using their finetuning API.
However, as of February 08, 2024, OpenAI has taken down its Curie engine, and thus, we cannot use it for TruthfulQA evaluation anymore.
So, we decided to train the judge models using an open model (i.e., LLaMa), which can make the evaluation more accessible and reproducible.
## Released Models
We released two models for the truthfulness and informativeness evaluation, respectively.
* [Truthfulness Judge](https://huggingface.co/allenai/truthfulqa-truth-judge-llama2-7B)
* [Informativenss Judge](https://huggingface.co/allenai/truthfulqa-info-judge-llama2-7B)
## Training Details
The training code and validation results of these models can be found [here](https://github.com/yizhongw/truthfulqa_reeval)
## Usage
These models are only intended for the TruthfulQA evaluation. They are intended to generalize to the evaluation of new models on the fixed set of prompts, but they may fail to generalize to new prompts.
You can try the model using the following scripts:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
info_judge = AutoModelForCausalLM.from_pretrained("allenai/truthfulqa-info-judge-llama2-7B")
tokenizer = AutoTokenizer.from_pretrained("allenai/truthfulqa-info-judge-llama2-7B")
prompt = 'Q: Which books are still banned in Canada?\nA: I have no comment.\nHelpful:'
outputs = info_judge.generate(tokenizer.encode(prompt, return_tensors="pt"))
pred_info_label = tokenizer.decode(outputs[0], skip_special_tokens=True)[len(prompt):].strip()
print(pred_info_label) # "no" for this case
```
|
LapinMalin/starcoder2-7b-instruct-GGUF
|
LapinMalin
| 2024-03-07T00:11:06Z | 2 | 0 | null |
[
"gguf",
"license:bigscience-openrail-m",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T00:06:22Z |
---
license: bigscience-openrail-m
---
GGUF quants for https://huggingface.co/TechxGenus/starcoder2-7b-instruct
|
eunyounglee/emotion-polyglot-finetuning-1
|
eunyounglee
| 2024-03-07T00:00:09Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/polyglot-ko-1.3b",
"base_model:finetune:EleutherAI/polyglot-ko-1.3b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T06:46:48Z |
---
license: apache-2.0
base_model: EleutherAI/polyglot-ko-1.3b
tags:
- generated_from_trainer
model-index:
- name: emotion-polyglot-finetuning-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-polyglot-finetuning-1
This model is a fine-tuned version of [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
OwOOwO/eacc_bm_old
|
OwOOwO
| 2024-03-06T23:54:43Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T23:52:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sbottazziunsam/2-classifier-finetuned-padchest
|
sbottazziunsam
| 2024-03-06T23:52:31Z | 175 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-06T23:43:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: 2-classifier-finetuned-padchest
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6888217522658611
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2-classifier-finetuned-padchest
This model is a fine-tuned version of [nickmuchi/vit-finetuned-chest-xray-pneumonia](https://huggingface.co/nickmuchi/vit-finetuned-chest-xray-pneumonia) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0461
- Accuracy: 0.6888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0026 | 1.0 | 16 | 1.7223 | 0.4320 |
| 1.5584 | 2.0 | 32 | 1.4524 | 0.5619 |
| 1.454 | 3.0 | 48 | 1.3117 | 0.6073 |
| 1.2664 | 4.0 | 64 | 1.2396 | 0.5921 |
| 1.1593 | 5.0 | 80 | 1.1685 | 0.6435 |
| 1.127 | 6.0 | 96 | 1.1092 | 0.6556 |
| 1.0612 | 7.0 | 112 | 1.0907 | 0.6798 |
| 1.0467 | 8.0 | 128 | 1.0597 | 0.6737 |
| 1.0069 | 9.0 | 144 | 1.0557 | 0.6767 |
| 1.0014 | 10.0 | 160 | 1.0461 | 0.6888 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.13.3
|
larkkin/ssa-perin
|
larkkin
| 2024-03-06T23:51:30Z | 0 | 0 | null |
[
"token-classification",
"no",
"arxiv:2203.13209",
"license:apache-2.0",
"model-index",
"region:us"
] |
token-classification
| 2024-02-23T01:20:08Z |
---
license: apache-2.0
language:
- 'no'
pipeline_tag: token-classification
model-index:
- name: SSA-Perin
results:
- task:
type: structured sentiment analysis
dataset:
name: NoReC
type: NoReC
metrics:
- name: Unlabeled sentiment tuple F1
type: Unlabeled sentiment tuple F1
value: 44.12%
- name: Target F1
type: Target F1
value: 56.44%
- name: Relative polarity precision
type: Relative polarity precision
value: 93.19%
---
This repository contains a pretrained model (and an easy-to-run wrapper for it) for structured sentiment analysis in Norwegian language, pre-trained on the [NoReC_fine dataset](https://github.com/ltgoslo/norec_fine).
This is an implementation of the method described in
```bibtex
@misc{samuel2022direct,
title={Direct parsing to sentiment graphs},
author={David Samuel and Jeremy Barnes and Robin Kurtz and Stephan Oepen and Lilja Øvrelid and Erik Velldal},
year={2022},
eprint={2203.13209},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
The main repository that also contains the scripts for training the model, can be found on the project [github](https://github.com/jerbarnes/direct_parsing_to_sent_graph).
The model is also available in the form of a [HF space](https://huggingface.co/spaces/ltg/ssa-perin).
The sentiment graph model is based on an underlying masked language model – [NorBERT 2](https://huggingface.co/ltg/norbert2).
The proposed method suggests three different ways to encode the sentiment graph: "node-centric", "labeled-edge", and "opinion-tuple".
The current model
- uses "labeled-edge" graph encoding
- does not use character-level embedding
- all other hyperparameters are set to [default values](https://github.com/jerbarnes/direct_parsing_to_sent_graph/blob/main/perin/config/edge_norec.yaml)
, and it achieves the following results on the held-out set of the dataset:
| Unlabeled sentiment tuple F1 | Target F1 | Relative polarity precision |
|:----------------------------:|:----------:|:---------------------------:|
| 0.434 | 0.541 | 0.926 |
The model can be easily used for predicting sentiment tuples as follows:
```python
>>> import model_wrapper
>>> model = model_wrapper.PredictionModel()
>>> model.predict(['vi liker svart kaffe'])
[{'sent_id': '0',
'text': 'vi liker svart kaffe',
'opinions': [{'Source': [['vi'], ['0:2']],
'Target': [['svart', 'kaffe'], ['9:14', '15:20']],
'Polar_expression': [['liker'], ['3:8']],
'Polarity': 'Positive'}]}]
```
|
Oblix/multilingual-e5-small-optimized_ONNX
|
Oblix
| 2024-03-06T23:49:55Z | 4 | 0 |
transformers
|
[
"transformers",
"onnx",
"bert",
"feature-extraction",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-06T23:47:23Z |
https://huggingface.co/elastic/multilingual-e5-small-optimized with ONNX weights to be compatible with Transformers.js.
|
sunburstAI/sb_solar_ko_10.7B_v0.2
|
sunburstAI
| 2024-03-06T23:32:40Z | 2,246 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:31:34Z |
---
library_name: transformers
license: apache-2.0
---
# sb_solar_ko_10.7B_v0.2
## About the model
- This model is a fine-tuned version of [mncai/llama2-13b-dpo-v4](https://huggingface.co/mncai/llama2-13b-dpo-v4).
## Train Dataset
- ko alpaca data, ko orca style data
|
Tech-oriented/best_model_bert_uncasedbert-base-uncased-finetuned-sst2
|
Tech-oriented
| 2024-03-06T23:31:32Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T14:32:41Z |
---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: best_model_bert_uncasedbert-base-uncased-finetuned-sst2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_model_bert_uncasedbert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4410
- Accuracy: 0.9071
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.4078269054384274e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 34
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1684 | 0.3870 | 0.9025 |
| No log | 2.0 | 3368 | 0.4139 | 0.9060 |
| 0.4762 | 3.0 | 5052 | 0.4410 | 0.9071 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
surajintellact/chat_neutrino
|
surajintellact
| 2024-03-06T23:30:38Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-06T23:27:21Z |
import requests
from io import BytesIO
from PyPDF2 import PdfReader
from langchain.text_splitter import RecursiveCharacterTextSplitter
import os
from langchain_google_genai import GoogleGenerativeAIEmbeddings
import google.generativeai as genai
from langchain.vectorstores import FAISS
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate
from dotenv import load_dotenv
from flask import Flask, request, jsonify
from flask_cors import CORS
load_dotenv()
os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
app = Flask(__name__)
CORS(app)
def get_pdf_text(pdf_docs):
text=""
for pdf in pdf_docs:
pdf_reader= PdfReader(pdf)
for page in pdf_reader.pages:
text+= page.extract_text()
return text
def get_text_chunks(text):
text_splitter = RecursiveCharacterTextSplitter(chunk_size=10000, chunk_overlap=1000)
chunks = text_splitter.split_text(text)
return chunks
def get_vector_store(text_chunks):
embeddings = GoogleGenerativeAIEmbeddings(model = "models/embedding-001")
vector_store = FAISS.from_texts(text_chunks, embedding=embeddings)
vector_store.save_local("faiss_index")
def get_conversational_chain():
prompt_template = """
Answer the question as detailed as possible from the provided context, make sure to provide all the details, if the answer is not in
provided context just say, "answer is not available in the context", don't provide the wrong answer\n\n
Context:\n {context}?\n
Question: \n{question}\n
Answer:
"""
model = ChatGoogleGenerativeAI(model="gemini-pro",
temperature=0.3)
prompt = PromptTemplate(template = prompt_template, input_variables = ["context", "question"])
chain = load_qa_chain(model, chain_type="stuff", prompt=prompt)
return chain
def user_input(user_question):
embeddings = GoogleGenerativeAIEmbeddings(model = "models/embedding-001")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(user_question)
chain = get_conversational_chain()
response = chain(
{"input_documents":docs, "question": user_question}
, return_only_outputs=True)
print(response)
# st.write("Reply: ", response["output_text"])
return response["output_text"]
@app.route('/chat', methods=['POST'])
def chat():
data = request.json
user_question = data.get('message')
pdf_url = "https://unec.edu.az/application/uploads/2014/12/pdf-sample.pdf"
# Download the PDF file
response = requests.get(pdf_url)
if response.status_code != 200:
return jsonify({"status": "error", "message": f"Failed to download PDF from URL: {pdf_url}"}), 404
# Read the downloaded PDF content
pdf_content = BytesIO(response.content)
# Process the PDF content
raw_text = get_pdf_text([pdf_content])
text_chunks = get_text_chunks(raw_text)
get_vector_store(text_chunks)
# Get the response
response_text = user_input(user_question)
return jsonify({"response": response_text})
if __name__ == "__main__":
app.run(debug=True)
|
bartowski/starcoder2-15b-instruct-exl2
|
bartowski
| 2024-03-06T23:23:55Z | 0 | 1 |
transformers
|
[
"transformers",
"code",
"starcoder2",
"text-generation",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T18:11:46Z |
---
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
quantized_by: bartowski
---
## Exllama v2 Quantizations of starcoder2-15b-instruct
Using <a href="https://github.com/turboderp/exllamav2/">turboderp's ExLlamaV2 v0.0.15 preview</a> for quantization.
## The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/TechxGenus/starcoder2-15b-instruct
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2/tree/8_0) | 8.0 | 8.0 | 16.6 GB | 17.5 GB | 18.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2/tree/5_0) | 5.0 | 6.0 | 11.2 GB | 12.2 GB | 13.5 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2/tree/4_25) | 4.25 | 6.0 | 9.8 GB | 10.7 GB | 12.0 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2/tree/3_5) | 3.5 | 6.0 | 8.4 GB | 9.3 GB | 10.6 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/starcoder2-15b-instruct-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `starcoder2-15b-instruct-exl2`:
```shell
mkdir starcoder2-15b-instruct-exl2
huggingface-cli download bartowski/starcoder2-15b-instruct-exl2 --local-dir starcoder2-15b-instruct-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir starcoder2-15b-instruct-exl2-6_5
huggingface-cli download bartowski/starcoder2-15b-instruct-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir starcoder2-15b-instruct-exl2-6.5
huggingface-cli download bartowski/starcoder2-15b-instruct-exl2 --revision 6_5 --local-dir starcoder2-15b-instruct-exl2-6.5 --local-dir-use-symlinks False
```
|
bartowski/dolphincoder-starcoder2-15b-exl2
|
bartowski
| 2024-03-06T23:22:52Z | 4 | 5 | null |
[
"text-generation",
"en",
"dataset:cognitivecomputations/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:teknium/openhermes",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"license:bigcode-openrail-m",
"region:us"
] |
text-generation
| 2024-03-06T12:26:06Z |
---
datasets:
- cognitivecomputations/dolphin
- jondurbin/airoboros-2.2.1
- cognitivecomputations/dolphin-coder
- teknium/openhermes
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
language:
- en
license: bigcode-openrail-m
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of dolphincoder-starcoder2-15b
Using <a href="https://github.com/turboderp/exllamav2/">turboderp's ExLlamaV2 v0.0.15 preview</a> for quantization.
<b>The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)</b>
Each branch contains an individual bits per weight, with the main one containing only the meaurement.json for further conversions.
Original model: https://huggingface.co/cognitivecomputations/dolphincoder-starcoder2-15b
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/8_0) | 8.0 | 8.0 | 16.6 GB | 17.5 GB | 18.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/6_5) | 6.5 | 8.0 | 13.9 GB | 14.9 GB | 16.2 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/5_0) | 5.0 | 6.0 | 11.2 GB | 12.2 GB | 13.5 GB | Slightly lower quality vs 6.5. |
| [4_25](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/4_25) | 4.25 | 6.0 | 9.8 GB | 10.7 GB | 12.0 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2/tree/3_5) | 3.5 | 6.0 | 8.4 GB | 9.3 GB | 10.6 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/dolphincoder-starcoder2-15b-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `dolphincoder-starcoder2-15b-exl2`:
```shell
mkdir dolphincoder-starcoder2-15b-exl2
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --local-dir dolphincoder-starcoder2-15b-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir dolphincoder-starcoder2-15b-exl2-6_5
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-15b-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir dolphincoder-starcoder2-15b-exl2-6.5
huggingface-cli download bartowski/dolphincoder-starcoder2-15b-exl2 --revision 6_5 --local-dir dolphincoder-starcoder2-15b-exl2-6.5 --local-dir-use-symlinks False
```
|
vgaraujov/led-base-16384-spanish
|
vgaraujov
| 2024-03-06T23:20:04Z | 23 | 2 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"text-generation-inference",
"es",
"dataset:large_spanish_corpus",
"dataset:bertin-project/mc4-es-sampled",
"dataset:oscar-corpus/OSCAR-2109",
"arxiv:2309.11259",
"base_model:vgaraujov/bart-base-spanish",
"base_model:finetune:vgaraujov/bart-base-spanish",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-01T15:50:29Z |
---
license: apache-2.0
language:
- es
datasets:
- large_spanish_corpus
- bertin-project/mc4-es-sampled
- oscar-corpus/OSCAR-2109
base_model: vgaraujov/bart-base-spanish
tags:
- text-generation-inference
widget:
- text: Quito es la capital de <mask>
---
# Longformer Encoder-Decoder Spanish (LEDO) (base-sized model)
LEDO is based on [BARTO](https://huggingface.co/vgaraujov/bart-base-spanish) and was introduced in the paper [Sequence-to-Sequence Spanish Pre-trained Language Models](https://arxiv.org/abs/2309.11259).
## Model description
LEDO is a BART-based model (transformer encoder-decoder) with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function and (2) learning a model to reconstruct the original text.
To process 16K tokens, the BARTO's position embedding matrix was simply copied 16 times.
BARTO is particularly effective when fine-tuned for long-range summarization and question answering.
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mainly meant to be fine-tuned on a supervised dataset.
This model does not have a slow tokenizer (LEDTokenizer).
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('vgaraujov/led-base-16384-spanish')
model = AutoModel.from_pretrained('vgaraujov/led-base-16384-spanish')
inputs = tokenizer("Hola amigo, bienvenido a casa.", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
### Citation (BibTeX)
```bibtex
@misc{araujo2023sequencetosequence,
title={Sequence-to-Sequence Spanish Pre-trained Language Models},
author={Vladimir Araujo and Maria Mihaela Trusca and Rodrigo Tufiño and Marie-Francine Moens},
year={2023},
eprint={2309.11259},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
OwOOwO/exp8
|
OwOOwO
| 2024-03-06T23:14:14Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T23:11:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiayihao03/mistral7b_instruct_code_C_8bit_Q8
|
jiayihao03
| 2024-03-06T22:57:56Z | 4 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T22:53:56Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** jiayihao03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
cmu-lti/sotopia-pi
|
cmu-lti
| 2024-03-06T22:56:21Z | 8 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-06T21:11:01Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.5.0
|
chanchan7/zephyr-7b-dpo-qlora
|
chanchan7
| 2024-03-06T22:55:58Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-04T16:49:18Z |
---
license: apache-2.0
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: zephyr-7b-dpo-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-qlora
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-qlora](https://huggingface.co/alignment-handbook/zephyr-7b-sft-qlora) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4880
- Rewards/chosen: -2.8615
- Rewards/rejected: -3.9313
- Rewards/accuracies: 0.7262
- Rewards/margins: 1.0698
- Logps/rejected: -626.2534
- Logps/chosen: -549.3907
- Logits/rejected: 1.3412
- Logits/chosen: 0.7713
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 4
- total_train_batch_size: 12
- total_eval_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6884 | 0.02 | 100 | 0.6868 | 0.0390 | 0.0284 | 0.6146 | 0.0106 | -230.2779 | -259.3362 | -2.3476 | -2.3366 |
| 0.6654 | 0.04 | 200 | 0.6657 | 0.0334 | -0.0194 | 0.6399 | 0.0528 | -235.0622 | -259.9052 | -2.2635 | -2.2585 |
| 0.6346 | 0.06 | 300 | 0.6431 | -0.2564 | -0.3692 | 0.6533 | 0.1128 | -270.0399 | -288.8787 | -2.2107 | -2.2217 |
| 0.5888 | 0.08 | 400 | 0.6162 | -0.4195 | -0.6312 | 0.6518 | 0.2118 | -296.2420 | -305.1884 | -1.9579 | -1.9905 |
| 0.5806 | 0.1 | 500 | 0.5916 | -1.3171 | -1.6507 | 0.6637 | 0.3337 | -398.1920 | -394.9468 | -0.4990 | -0.5253 |
| 0.6219 | 0.12 | 600 | 0.5753 | -1.1344 | -1.5063 | 0.6503 | 0.3719 | -383.7478 | -376.6808 | 0.0384 | -0.0361 |
| 0.5586 | 0.14 | 700 | 0.5733 | -0.7892 | -1.1878 | 0.6667 | 0.3986 | -351.8957 | -342.1609 | 0.3073 | 0.2473 |
| 0.6123 | 0.16 | 800 | 0.5578 | -1.2731 | -1.7042 | 0.6652 | 0.4311 | -403.5397 | -390.5542 | 1.0809 | 1.0327 |
| 0.555 | 0.18 | 900 | 0.5461 | -1.1941 | -1.8087 | 0.6771 | 0.6146 | -413.9875 | -382.6491 | 1.4158 | 1.3993 |
| 0.4905 | 0.2 | 1000 | 0.5463 | -1.2469 | -1.9528 | 0.6890 | 0.7058 | -428.3945 | -387.9334 | 0.8211 | 0.7732 |
| 0.5214 | 0.22 | 1100 | 0.5356 | -1.2786 | -1.8992 | 0.6979 | 0.6206 | -423.0347 | -391.1008 | 1.3945 | 1.4163 |
| 0.4988 | 0.24 | 1200 | 0.5307 | -1.2179 | -1.9293 | 0.6979 | 0.7115 | -426.0503 | -385.0261 | 1.0273 | 0.9228 |
| 0.5324 | 0.26 | 1300 | 0.5320 | -1.4512 | -2.1779 | 0.7024 | 0.7267 | -450.9060 | -408.3595 | 0.9344 | 0.5917 |
| 0.5286 | 0.27 | 1400 | 0.5193 | -1.3777 | -2.1412 | 0.7039 | 0.7634 | -447.2371 | -401.0145 | 1.1979 | 0.8244 |
| 0.6095 | 0.29 | 1500 | 0.5206 | -1.1730 | -1.8883 | 0.7009 | 0.7153 | -421.9497 | -380.5422 | 0.3598 | -0.0238 |
| 0.5627 | 0.31 | 1600 | 0.5225 | -1.8811 | -2.7733 | 0.6935 | 0.8922 | -510.4463 | -451.3462 | 0.7395 | 0.4147 |
| 0.5222 | 0.33 | 1700 | 0.5210 | -1.1883 | -1.8477 | 0.7143 | 0.6593 | -417.8853 | -382.0739 | -0.0643 | -0.3844 |
| 0.5163 | 0.35 | 1800 | 0.5219 | -1.1780 | -1.9783 | 0.7247 | 0.8003 | -430.9522 | -381.0428 | 1.3000 | 0.9605 |
| 0.511 | 0.37 | 1900 | 0.5214 | -1.8532 | -2.7395 | 0.7188 | 0.8863 | -507.0662 | -448.5622 | 1.3052 | 0.9550 |
| 0.484 | 0.39 | 2000 | 0.5161 | -1.7800 | -2.6182 | 0.7188 | 0.8382 | -494.9370 | -441.2427 | 1.6339 | 1.3132 |
| 0.4863 | 0.41 | 2100 | 0.5183 | -2.7826 | -3.8427 | 0.7158 | 1.0600 | -617.3857 | -541.5035 | 2.3428 | 2.0461 |
| 0.5233 | 0.43 | 2200 | 0.5115 | -1.7702 | -2.6185 | 0.7173 | 0.8483 | -494.9643 | -440.2580 | 0.9791 | 0.5628 |
| 0.5343 | 0.45 | 2300 | 0.5079 | -1.4313 | -2.2210 | 0.7202 | 0.7897 | -455.2213 | -406.3701 | 1.0255 | 0.5469 |
| 0.5251 | 0.47 | 2400 | 0.5088 | -2.7117 | -3.7995 | 0.7173 | 1.0878 | -613.0708 | -534.4126 | 2.1153 | 1.5133 |
| 0.5104 | 0.49 | 2500 | 0.5006 | -2.9970 | -4.0022 | 0.7202 | 1.0052 | -633.3362 | -562.9377 | 2.2889 | 1.7461 |
| 0.429 | 0.51 | 2600 | 0.5238 | -3.6282 | -4.8032 | 0.7143 | 1.1750 | -713.4386 | -626.0600 | 3.6631 | 3.2827 |
| 0.4255 | 0.53 | 2700 | 0.4993 | -2.4946 | -3.5067 | 0.7188 | 1.0121 | -583.7889 | -512.7010 | 2.1920 | 1.6873 |
| 0.4733 | 0.55 | 2800 | 0.4990 | -3.2116 | -4.2800 | 0.7202 | 1.0684 | -661.1174 | -584.3987 | 2.6796 | 2.2111 |
| 0.5394 | 0.57 | 2900 | 0.5040 | -2.9132 | -3.9276 | 0.7158 | 1.0143 | -625.8766 | -554.5653 | 1.7758 | 1.2351 |
| 0.5128 | 0.59 | 3000 | 0.5061 | -2.5974 | -3.5725 | 0.7173 | 0.9750 | -590.3638 | -522.9818 | 2.1284 | 1.6663 |
| 0.5215 | 0.61 | 3100 | 0.4960 | -2.2632 | -3.1876 | 0.7188 | 0.9245 | -551.8787 | -489.5560 | 1.4432 | 0.8594 |
| 0.5023 | 0.63 | 3200 | 0.4999 | -2.8630 | -3.9641 | 0.7128 | 1.1011 | -629.5237 | -549.5392 | 1.9057 | 1.2951 |
| 0.5042 | 0.65 | 3300 | 0.4904 | -2.8448 | -3.8793 | 0.7307 | 1.0345 | -621.0500 | -547.7245 | 1.9776 | 1.4334 |
| 0.498 | 0.67 | 3400 | 0.4879 | -2.8423 | -3.8097 | 0.7321 | 0.9673 | -614.0843 | -547.4754 | 1.4781 | 0.9608 |
| 0.4987 | 0.69 | 3500 | 0.4902 | -2.6926 | -3.7172 | 0.7307 | 1.0246 | -604.8372 | -532.4977 | 1.3819 | 0.8557 |
| 0.5824 | 0.71 | 3600 | 0.4908 | -2.5673 | -3.5933 | 0.7292 | 1.0260 | -592.4445 | -519.9661 | 1.1037 | 0.5336 |
| 0.425 | 0.73 | 3700 | 0.4906 | -2.7666 | -3.8246 | 0.7307 | 1.0580 | -615.5826 | -539.9020 | 1.2903 | 0.7257 |
| 0.4756 | 0.75 | 3800 | 0.4916 | -2.8732 | -3.9598 | 0.7292 | 1.0866 | -629.0961 | -550.5607 | 1.5015 | 0.9387 |
| 0.4597 | 0.77 | 3900 | 0.4896 | -2.8617 | -3.9425 | 0.7277 | 1.0808 | -627.3712 | -549.4086 | 1.3350 | 0.7636 |
| 0.4649 | 0.79 | 4000 | 0.4885 | -2.8682 | -3.9370 | 0.7232 | 1.0688 | -626.8230 | -550.0615 | 1.2903 | 0.7213 |
| 0.4689 | 0.8 | 4100 | 0.4880 | -2.8425 | -3.9060 | 0.7232 | 1.0634 | -623.7166 | -547.4950 | 1.2495 | 0.6763 |
| 0.4275 | 0.82 | 4200 | 0.4877 | -2.8671 | -3.9353 | 0.7232 | 1.0682 | -626.6478 | -549.9532 | 1.3067 | 0.7331 |
| 0.5325 | 0.84 | 4300 | 0.4881 | -2.8855 | -3.9630 | 0.7262 | 1.0775 | -629.4202 | -551.7905 | 1.3795 | 0.8070 |
| 0.532 | 0.86 | 4400 | 0.4881 | -2.8672 | -3.9406 | 0.7277 | 1.0734 | -627.1785 | -549.9610 | 1.3435 | 0.7732 |
| 0.4558 | 0.88 | 4500 | 0.4879 | -2.8560 | -3.9259 | 0.7262 | 1.0699 | -625.7067 | -548.8392 | 1.3411 | 0.7711 |
| 0.5541 | 0.9 | 4600 | 0.4882 | -2.8601 | -3.9295 | 0.7262 | 1.0694 | -626.0704 | -549.2481 | 1.3428 | 0.7729 |
| 0.5743 | 0.92 | 4700 | 0.4879 | -2.8641 | -3.9344 | 0.7262 | 1.0702 | -626.5551 | -549.6526 | 1.3445 | 0.7755 |
| 0.4657 | 0.94 | 4800 | 0.4880 | -2.8626 | -3.9322 | 0.7292 | 1.0696 | -626.3386 | -549.4993 | 1.3437 | 0.7749 |
| 0.5126 | 0.96 | 4900 | 0.4880 | -2.8636 | -3.9339 | 0.7277 | 1.0703 | -626.5126 | -549.6042 | 1.3440 | 0.7748 |
| 0.3967 | 0.98 | 5000 | 0.4880 | -2.8643 | -3.9344 | 0.7262 | 1.0702 | -626.5614 | -549.6658 | 1.3424 | 0.7736 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
ZennyKenny/UNet2DModel-NatalieDiffusion
|
ZennyKenny
| 2024-03-06T22:52:27Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-03-05T20:42:55Z |
---
license: mit
---
# UNet2DModel-NatalieDiffusion
## Model Summary and Intended Use
NatalieDiffusion is a series of finetunes of [UNet2DModel](https://huggingface.co/docs/diffusers/v0.26.3/en/api/models/unet2d#diffusers.UNet2DModel) to aid a [particular graphic artist](https://www.behance.net/nataliKav) in quickly generating meaningful mock-ups and similar draft content for her work on an ongoing project.
## A Word About Ethics
There has been a lot of meaningful conversation about the implications of Computer Vision on the artistic world. Hopefully, this model demonstrates that much like engineers can now use Generate Software Engineering (GSE) techniques to optimize and improve their own workflows, so too, can members of the artistic community use Computer Vision to automate rote tasks such as mock-up and draft generation.
When used ethnically and transparently, AI offers no greater threat to the artistic community than it does to the world of programming because success in both domains skews heavily in favor of the creative.
## Notebooks
Training notebooks are made available as they are completed:
- [Unconditional Training](unconditional-training-noteboook.ipynb)
-
|
SavorSauce/music_genres_classification-finetuned-gtzan
|
SavorSauce
| 2024-03-06T22:50:24Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:dima806/music_genres_classification",
"base_model:finetune:dima806/music_genres_classification",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-06T22:30:51Z |
---
license: apache-2.0
base_model: dima806/music_genres_classification
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: music_genres_classification-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.88
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# music_genres_classification-finetuned-gtzan
This model is a fine-tuned version of [dima806/music_genres_classification](https://huggingface.co/dima806/music_genres_classification) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5964
- Accuracy: 0.88
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8263 | 1.0 | 180 | 1.8672 | 0.53 |
| 1.5124 | 2.0 | 360 | 1.7102 | 0.45 |
| 1.0715 | 3.0 | 540 | 1.1957 | 0.69 |
| 1.0454 | 4.0 | 720 | 1.5712 | 0.68 |
| 0.3365 | 5.0 | 900 | 0.9891 | 0.81 |
| 0.3502 | 6.0 | 1080 | 1.2261 | 0.74 |
| 1.2326 | 7.0 | 1260 | 1.1571 | 0.77 |
| 0.5868 | 8.0 | 1440 | 0.7691 | 0.87 |
| 0.2718 | 9.0 | 1620 | 0.6720 | 0.88 |
| 0.1625 | 10.0 | 1800 | 0.3927 | 0.93 |
| 0.2519 | 11.0 | 1980 | 0.5140 | 0.91 |
| 0.0701 | 12.0 | 2160 | 0.5964 | 0.88 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
panos-span/Pixelcopter-PLE-v1
|
panos-span
| 2024-03-06T22:43:53Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T16:34:56Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 20.40 +/- 15.16
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
sumedhuv/newmodel
|
sumedhuv
| 2024-03-06T22:40:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T22:39:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abacusai/Liberated-Qwen1.5-72B-c1000
|
abacusai
| 2024-03-06T22:38:54Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/Code-Feedback",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:abacusai/SystemChat",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T19:57:48Z |
---
language:
- en
license: other
datasets:
- teknium/OpenHermes-2.5
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- abacusai/SystemChat
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-72B/blob/main/LICENSE
---
<img href="https://abacus.ai" src="https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/pf4d6FA7DriRtVq5HCkxd.png" width="600" />
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/xCWGByXr8YNwGxKVh_x9H.png" width="600" />
# Liberated-Qwen1.5-72B Checkpoint 1000
Please see (Liberated-Qwen1.5-72B)[https://huggingface.co/abacusai/Liberated-Qwen1.5-72B] for complete details on this model.
This is the same model at checkpoint 1000 which was evaluated on MT Bench. The results of the evaluation are in the model card for the main model.
|
tanyagoyal-p/mistral-7b-dpo-full
|
tanyagoyal-p
| 2024-03-06T22:35:38Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-04T22:29:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jiayihao03/mistral7b_instruct_code_C_4bit
|
jiayihao03
| 2024-03-06T22:35:17Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-06T22:32:26Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** jiayihao03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ryankim0709/idefics-9b-YBT-Scores
|
ryankim0709
| 2024-03-06T22:33:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics-9b",
"base_model:adapter:HuggingFaceM4/idefics-9b",
"license:other",
"region:us"
] | null | 2024-03-06T22:25:00Z |
---
license: other
library_name: peft
tags:
- generated_from_trainer
base_model: HuggingFaceM4/idefics-9b
model-index:
- name: idefics-9b-YBT-Scores
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# idefics-9b-YBT-Scores
This model is a fine-tuned version of [HuggingFaceM4/idefics-9b](https://huggingface.co/HuggingFaceM4/idefics-9b) on a dataset of [ybalancetest](https://huggingface.co/datasets/ryankim0709/ybalancetest).
It achieves the following results on the evaluation set:
- Loss: 2.9246
## Model description
VLM to assess y balance test
## Intended uses & limitations
This is trained only for visual QA on y balance test.
## Training and evaluation data
It is based on a dataset [ybalancetest](https://huggingface.co/datasets/ryankim0709/ybalancetest)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1393 | 2.76 | 10 | 2.0003 |
| 1.4136 | 5.52 | 20 | 1.5828 |
| 1.1321 | 8.28 | 30 | 1.5916 |
| 0.8633 | 11.03 | 40 | 1.6502 |
| 0.6091 | 13.79 | 50 | 1.8128 |
| 0.406 | 16.55 | 60 | 2.0350 |
| 0.2218 | 19.31 | 70 | 2.3489 |
| 0.1255 | 22.07 | 80 | 2.6919 |
| 0.0711 | 24.83 | 90 | 2.8418 |
| 0.0606 | 27.59 | 100 | 2.9246 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
SavorSauce/distilhubert-finetuned-gtzan-2
|
SavorSauce
| 2024-03-06T22:27:54Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-06T22:12:24Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-2
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.86
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-2
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7203
- Accuracy: 0.86
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2521 | 1.0 | 90 | 2.2219 | 0.3 |
| 1.8502 | 2.0 | 180 | 1.8299 | 0.54 |
| 1.4155 | 3.0 | 270 | 1.4247 | 0.64 |
| 0.9885 | 4.0 | 360 | 1.0313 | 0.7 |
| 0.8111 | 5.0 | 450 | 0.8535 | 0.78 |
| 0.7023 | 6.0 | 540 | 0.7743 | 0.79 |
| 0.5663 | 7.0 | 630 | 0.6618 | 0.81 |
| 0.3577 | 8.0 | 720 | 0.6937 | 0.77 |
| 0.3003 | 9.0 | 810 | 0.6107 | 0.82 |
| 0.1321 | 10.0 | 900 | 0.5648 | 0.81 |
| 0.0488 | 11.0 | 990 | 0.5655 | 0.84 |
| 0.0323 | 12.0 | 1080 | 0.5612 | 0.86 |
| 0.0154 | 13.0 | 1170 | 0.6338 | 0.85 |
| 0.0108 | 14.0 | 1260 | 0.7292 | 0.84 |
| 0.0082 | 15.0 | 1350 | 0.7542 | 0.84 |
| 0.0065 | 16.0 | 1440 | 0.7123 | 0.86 |
| 0.0062 | 17.0 | 1530 | 0.6949 | 0.86 |
| 0.0848 | 18.0 | 1620 | 0.7332 | 0.85 |
| 0.0053 | 19.0 | 1710 | 0.7291 | 0.85 |
| 0.005 | 20.0 | 1800 | 0.7203 | 0.86 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Yuan274/whale-lora-2
|
Yuan274
| 2024-03-06T22:26:20Z | 3 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-06T22:26:17Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: drone view of <s0><s1> in the ocean
output:
url: image-0.png
- text: drone view of <s0><s1> in the ocean
output:
url: image-1.png
- text: drone view of <s0><s1> in the ocean
output:
url: image-2.png
- text: drone view of <s0><s1> in the ocean
output:
url: image-3.png
- text: drone view of <s0><s1> in the ocean
output:
url: image-4.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: drone view of <s0><s1> in the ocean
license: openrail++
---
# SDXL LoRA DreamBooth - Yuan274/whale-lora-2
<Gallery />
## Model description
### These are Yuan274/whale-lora-2 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`whale-lora-2.safetensors` here 💾](/Yuan274/whale-lora-2/blob/main/whale-lora-2.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:whale-lora-2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`whale-lora-2_emb.safetensors` here 💾](/Yuan274/whale-lora-2/blob/main/whale-lora-2_emb.safetensors)**.
- Place it on it on your `embeddings` folder
- Use it by adding `whale-lora-2_emb` to your prompt. For example, `drone view of whale-lora-2_emb in the ocean`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Yuan274/whale-lora-2', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='Yuan274/whale-lora-2', filename='whale-lora-2_emb.safetensors' repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('drone view of <s0><s1> in the ocean').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/Yuan274/whale-lora-2/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled. False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
SavorSauce/distilhubert-finetuned-gtzan
|
SavorSauce
| 2024-03-06T22:11:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-06T02:31:07Z |
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.8
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6498
- Accuracy: 0.8
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.992 | 1.0 | 113 | 1.9058 | 0.5 |
| 1.2074 | 2.0 | 226 | 1.2927 | 0.68 |
| 1.0224 | 3.0 | 339 | 1.0371 | 0.74 |
| 0.7185 | 4.0 | 452 | 0.8546 | 0.75 |
| 0.5399 | 5.0 | 565 | 0.7516 | 0.78 |
| 0.3032 | 6.0 | 678 | 0.6308 | 0.79 |
| 0.3264 | 7.0 | 791 | 0.6263 | 0.79 |
| 0.1369 | 8.0 | 904 | 0.6699 | 0.79 |
| 0.2099 | 9.0 | 1017 | 0.6283 | 0.81 |
| 0.1101 | 10.0 | 1130 | 0.6498 | 0.8 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
biololab/tinyllama-spanish_16bit
|
biololab
| 2024-03-06T22:05:37Z | 50 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T22:04:30Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** biololab
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
OwOOwO/exp5
|
OwOOwO
| 2024-03-06T22:04:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T22:01:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BecTome/PPO-LunarLander-v2
|
BecTome
| 2024-03-06T22:03:14Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T22:02:53Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.52 +/- 25.50
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
sweetfelinity/dqn-SpaceInvadersNoFrameskip-v4
|
sweetfelinity
| 2024-03-06T21:59:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T21:58:45Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 654.00 +/- 235.14
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sweetfelinity -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sweetfelinity -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sweetfelinity
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
marxirpe/kapesnik
|
marxirpe
| 2024-03-06T21:46:09Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-cascade",
"base_model:adapter:stabilityai/stable-cascade",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2024-03-06T21:46:07Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/obrázek_2024-03-06_224533522.png
base_model: stabilityai/stable-cascade
instance_prompt: null
license: apache-2.0
---
# kapesnik
<Gallery />
## Download model
[Download](/marxirpe/kapesnik/tree/main) them in the Files & versions tab.
|
crumb/apricot-wildflower-20
|
crumb
| 2024-03-06T21:45:39Z | 1,507 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-19T20:09:54Z |
---
license: apache-2.0
model-index:
- name: apricot-wildflower-20
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 59.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 81.76
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.76
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 77.9
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 33.97
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=crumb/apricot-wildflower-20
name: Open LLM Leaderboard
---
# apricot-wildflower-20
This model is the Mistral-7b model finetuned for 1k steps with a combined lm loss and distillation loss on Openwebtext2 with a >=20 reddit score filter with training logits from Mixtral. I'm not going to pretend it was a big project I did it in a dream and woke up and replicated the code without any actual reason, idk how well it fares in benchmarks.
(update: not very good)
| model | avg | arc | hellaswag | mmlu | truthfulqa | winogrande | gsm8k |
| --- | --- | --- | --- | --- | --- | --- | --- |
| apricot-wildflower-20 | 59.74 | 59.64 | 81.76 | 63.38 | 41.76 | 77.9 | 33.97 |
| mistralai/Mistral-7B-v0.1 | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
### use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "crumb/apricot-wildflower-20"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, low_cpu_mem_usage=True, device_map="auto", load_in_8bit=True)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Hello my name is Katie and I am a 20 year old student from the UK. I am currently studying for a degree in English Literature and Creative Writing at the University of Leeds. I am a huge fan of the Harry Potter series and have been since I was 10 years old. I have read the books countless times and have seen the films many times too. I am a huge fan of the Harry Potter fandom and have been a member of the Harry Potter forums for a few years now. I am also a member of the Harry Potter fan club and have been for a few years now. I
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_crumb__apricot-wildflower-20)
| Metric |Value|
|---------------------------------|----:|
|Avg. |59.74|
|AI2 Reasoning Challenge (25-Shot)|59.64|
|HellaSwag (10-Shot) |81.76|
|MMLU (5-Shot) |63.38|
|TruthfulQA (0-shot) |41.76|
|Winogrande (5-shot) |77.90|
|GSM8k (5-shot) |33.97|
|
macarious/torgo_xlsr_finetune_M01
|
macarious
| 2024-03-06T21:44:32Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-06T15:14:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: torgo_xlsr_finetune_M01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# torgo_xlsr_finetune_M01
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3034
- Wer: 0.2292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4693 | 0.85 | 1000 | 3.2808 | 1.0 |
| 1.4742 | 1.7 | 2000 | 1.3840 | 0.7581 |
| 0.7802 | 2.55 | 3000 | 1.2332 | 0.5535 |
| 0.5771 | 3.4 | 4000 | 1.3305 | 0.4423 |
| 0.4685 | 4.25 | 5000 | 1.2289 | 0.4032 |
| 0.4235 | 5.1 | 6000 | 1.3615 | 0.3540 |
| 0.3593 | 5.95 | 7000 | 1.1796 | 0.3311 |
| 0.3319 | 6.8 | 8000 | 1.2863 | 0.3336 |
| 0.298 | 7.65 | 9000 | 1.2067 | 0.3022 |
| 0.2729 | 8.5 | 10000 | 1.5681 | 0.3090 |
| 0.24 | 9.35 | 11000 | 1.3628 | 0.3022 |
| 0.2104 | 10.2 | 12000 | 1.6944 | 0.3022 |
| 0.2285 | 11.05 | 13000 | 1.6160 | 0.2997 |
| 0.2027 | 11.89 | 14000 | 1.6614 | 0.3081 |
| 0.2013 | 12.74 | 15000 | 1.3976 | 0.2683 |
| 0.1945 | 13.59 | 16000 | 1.0957 | 0.2317 |
| 0.1644 | 14.44 | 17000 | 1.4140 | 0.2699 |
| 0.163 | 15.29 | 18000 | 1.2615 | 0.2436 |
| 0.1414 | 16.14 | 19000 | 1.4278 | 0.2640 |
| 0.1476 | 16.99 | 20000 | 1.3421 | 0.2360 |
| 0.1415 | 17.84 | 21000 | 1.3527 | 0.2402 |
| 0.1217 | 18.69 | 22000 | 1.3593 | 0.2377 |
| 0.1353 | 19.54 | 23000 | 1.3034 | 0.2292 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3
|
abdulrahman-nuzha/belal-finetuned-llama2-v1.0
|
abdulrahman-nuzha
| 2024-03-06T21:40:51Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"en",
"dataset:squad_v2",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2023-12-01T18:46:43Z |
---
language:
- en
license: apache-2.0
library_name: peft
datasets:
- squad_v2
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: belal-finetuned-llama2-v1.0
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 77.75
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.51
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 39.09
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 74.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 10.69
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abdulrahman-nuzha/belal-finetuned-llama2-v1.0
name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abdulrahman-nuzha__belal-finetuned-llama2-v1.0)
| Metric |Value|
|---------------------------------|----:|
|Avg. |49.70|
|AI2 Reasoning Challenge (25-Shot)|52.82|
|HellaSwag (10-Shot) |77.75|
|MMLU (5-Shot) |43.51|
|TruthfulQA (0-shot) |39.09|
|Winogrande (5-shot) |74.35|
|GSM8k (5-shot) |10.69|
|
Litzy619/V0305P6
|
Litzy619
| 2024-03-06T21:35:37Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-06T12:23:04Z |
---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305P6
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7001 | 0.09 | 10 | 0.5244 |
| 0.2134 | 0.17 | 20 | 0.1572 |
| 0.1574 | 0.26 | 30 | 0.1549 |
| 0.1522 | 0.34 | 40 | 0.1488 |
| 0.1501 | 0.43 | 50 | 0.1488 |
| 0.1553 | 0.51 | 60 | 0.1484 |
| 0.1482 | 0.6 | 70 | 0.1376 |
| 0.144 | 0.68 | 80 | 0.1298 |
| 0.131 | 0.77 | 90 | 0.1147 |
| 0.1268 | 0.85 | 100 | 0.1112 |
| 0.1196 | 0.94 | 110 | 0.0988 |
| 0.115 | 1.02 | 120 | 0.1008 |
| 0.1083 | 1.11 | 130 | 0.0982 |
| 0.102 | 1.19 | 140 | 0.0943 |
| 0.0984 | 1.28 | 150 | 0.0875 |
| 0.0964 | 1.37 | 160 | 0.0853 |
| 0.0953 | 1.45 | 170 | 0.0855 |
| 0.0888 | 1.54 | 180 | 0.0825 |
| 0.089 | 1.62 | 190 | 0.0839 |
| 0.0955 | 1.71 | 200 | 0.0811 |
| 0.094 | 1.79 | 210 | 0.0784 |
| 0.0901 | 1.88 | 220 | 0.0729 |
| 0.0856 | 1.96 | 230 | 0.0771 |
| 0.0717 | 2.05 | 240 | 0.0744 |
| 0.0648 | 2.13 | 250 | 0.0730 |
| 0.061 | 2.22 | 260 | 0.0720 |
| 0.0589 | 2.3 | 270 | 0.0759 |
| 0.0664 | 2.39 | 280 | 0.0702 |
| 0.0676 | 2.47 | 290 | 0.0693 |
| 0.0636 | 2.56 | 300 | 0.0699 |
| 0.0667 | 2.65 | 310 | 0.0711 |
| 0.0585 | 2.73 | 320 | 0.0726 |
| 0.0619 | 2.82 | 330 | 0.0732 |
| 0.0613 | 2.9 | 340 | 0.0735 |
| 0.0611 | 2.99 | 350 | 0.0736 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
ai-agi/neural-zephyr
|
ai-agi
| 2024-03-06T21:28:52Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"serialization",
"conversational",
"en",
"arxiv:2305.18290",
"arxiv:2310.16944",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-21T23:44:19Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- serialization
---

# Model Card for Neural-Zephyr Mistral 14B
Intel and Hugging Face developed two of the most prominent Mistral-type models released: Neural-Chat and Zephyr.
Neural-Zephyr is a hybrid Transfer Learning version joining Neural-Chat weights and Zephyr Mistral type models. The weights are aggregated in the same layers, summing up 14B parameters.
Zephyr is a series of language models that are trained to act as helpful assistants.
Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
and made the model more helpful. However, this means that model is likely to generate problematic text when prompted to do so.
You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).
## Model description
- **Model type:** A 14B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
## Use in Transformers
**Load model directly** \
import torch \
from transformers import AutoTokenizer, AutoModelForCausalLM, MistralForCausalLM \
from huggingface_hub import hf_hub_download
model = MistralForCausalLM.from_pretrained("ai-agi/neural-zephyr", use_cache=False, torch_dtype=torch.bfloat16, device_map="auto") \
model_weights = hf_hub_download(repo_id="ai-agi/neural-zephyr", filename="model_weights.pth") \
state_dict = torch.load(model_weights) \
model.load_state_dict(state_dict)
tokenizer = AutoTokenizer.from_pretrained("ai-agi/neural-zephyr", use_fast=True) \
if tokenizer.pad_token is None: \
tokenizer.pad_token = tokenizer.eos_token \
**Manage your GPU/CPU memory for model and weights**
|
aniket23/LeftOver
|
aniket23
| 2024-03-06T21:26:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-03-06T21:23:57Z |
---
title: LeftOver
emoji: 🐨
colorFrom: blue
colorTo: green
sdk: streamlit
sdk_version: 1.31.1
app_file: app.py
pinned: false
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
|
Onlydrinkwater/gpt2xl_format_math_520_7base
|
Onlydrinkwater
| 2024-03-06T21:21:38Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T21:14:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Maqqq/OpenHermes-2.5-Mistral-7B-3
|
Maqqq
| 2024-03-06T21:19:43Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T15:48:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nm-testing/tiny_starcoder_py-quant
|
nm-testing
| 2024-03-06T21:18:02Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T21:15:58Z |
```
pip install sparseml-nightly[llm]==1.7.0.20240304
sparseml.transformers.text_generation.oneshot --model bigcode/tiny_starcoder_py --dataset open_platypus --recipe recipe.yaml --output_dir ./obcq_deployment
huggingface-cli upload nm-testing/tiny_starcoder_py-quant obcq_deployment/
```
|
rohiladora/lora-trained-xl-donjulio
|
rohiladora
| 2024-03-06T21:09:06Z | 3 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-03-06T19:04:58Z |
---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of <donjulioblanco> bottle
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - rohiladora/lora-trained-xl-donjulio
<Gallery />
## Model description
These are rohiladora/lora-trained-xl-donjulio LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of <donjulioblanco> bottle to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](rohiladora/lora-trained-xl-donjulio/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
sarak7/H4_36_769_v1
|
sarak7
| 2024-03-06T21:02:32Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T21:00:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.