| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt |
|---|---|---|---|---|---|---|---|---|
| farid1088/GQA_BERT_German_legal_SQuAD_2000 | farid1088 | 2024-03-07T20:17:43Z | 5 | 0 | transformers | transformers, tensorboard, safetensors, bert, question-answering, generated_from_trainer, endpoints_compatible, region:us | question-answering | 2024-03-07T18:26:42Z |

The `card` field contains the model card below.
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_German_legal_SQuAD_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_German_legal_SQuAD_2000
This model was trained from scratch on an unspecified dataset (the Trainer did not record a dataset name).
It achieves the following results on the evaluation set:
- Loss: 1.1765
## Model description
More information needed
## Intended uses & limitations
More information needed
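The model is tagged for question answering with the `transformers` library, so a minimal inference sketch would use the `question-answering` pipeline. This is an assumption from the tags, not documented usage; the German question and context below are illustrative placeholders, not taken from the training data.

```python
# Hedged sketch: extractive QA with the transformers pipeline API.
# MODEL_ID is the repository name from this card; the example
# question/context are illustrative placeholders.
MODEL_ID = "farid1088/GQA_BERT_German_legal_SQuAD_2000"

question = "Wer traegt die Beweislast?"  # "Who bears the burden of proof?"
context = (
    "Nach allgemeinen Grundsaetzen traegt die Beweislast, "
    "wer sich auf eine ihm guenstige Norm beruft."
)


def answer(question: str, context: str, model_id: str = MODEL_ID) -> str:
    """Download the model and extract an answer span from `context`.

    Imported lazily so the module can be loaded without network access.
    """
    from transformers import pipeline

    qa = pipeline("question-answering", model=model_id)
    return qa(question=question, context=context)["answer"]
```

Calling `answer(question, context)` downloads the checkpoint from the Hub and returns the extracted answer span; no evaluation metrics beyond the loss are reported on this card, so output quality is untested.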
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
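The hyperparameters can be cross-checked against the results table below: epoch 1.0 ends at step 2, so each epoch takes 2 optimizer steps, which with a train batch size of 160 bounds the training set at 320 examples. A small sanity-check sketch:

```python
# Sanity check relating the hyperparameters to the step counts in
# the training-results table.
train_batch_size = 160
num_epochs = 2000
steps_per_epoch = 2  # from the table: epoch 1.0 -> step 2

# Upper bound on training-set size implied by the step count
# (the final batch of an epoch may be smaller than train_batch_size).
max_train_examples = steps_per_epoch * train_batch_size

# Total optimizer steps for the full 2000-epoch run.
total_steps = steps_per_epoch * num_epochs

print(max_train_examples)  # 320
print(total_steps)         # 4000
```

The tiny per-epoch step count explains both the "No log" training-loss entries (losses are logged every 500 steps, i.e. every 250 epochs) and the oscillating validation loss: with at most a few hundred examples, 2000 epochs amounts to heavy repetition of a very small dataset.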
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.4104 |
| No log | 2.0 | 4 | 4.3755 |
| No log | 3.0 | 6 | 3.8375 |
| No log | 4.0 | 8 | 3.4004 |
| No log | 5.0 | 10 | 2.9899 |
| No log | 6.0 | 12 | 2.6185 |
| No log | 7.0 | 14 | 2.2836 |
| No log | 8.0 | 16 | 2.0170 |
| No log | 9.0 | 18 | 1.7777 |
| No log | 10.0 | 20 | 1.5673 |
| No log | 11.0 | 22 | 1.4034 |
| No log | 12.0 | 24 | 1.2563 |
| No log | 13.0 | 26 | 1.1298 |
| No log | 14.0 | 28 | 1.0538 |
| No log | 15.0 | 30 | 0.9918 |
| No log | 16.0 | 32 | 0.9477 |
| No log | 17.0 | 34 | 0.9131 |
| No log | 18.0 | 36 | 0.9065 |
| No log | 19.0 | 38 | 0.9138 |
| No log | 20.0 | 40 | 0.8988 |
| No log | 21.0 | 42 | 0.8951 |
| No log | 22.0 | 44 | 0.9161 |
| No log | 23.0 | 46 | 0.9520 |
| No log | 24.0 | 48 | 0.9669 |
| No log | 25.0 | 50 | 0.9614 |
| No log | 26.0 | 52 | 0.9425 |
| No log | 27.0 | 54 | 0.9260 |
| No log | 28.0 | 56 | 0.9222 |
| No log | 29.0 | 58 | 0.9374 |
| No log | 30.0 | 60 | 0.9696 |
| No log | 31.0 | 62 | 0.9703 |
| No log | 32.0 | 64 | 0.9604 |
| No log | 33.0 | 66 | 0.9545 |
| No log | 34.0 | 68 | 0.9464 |
| No log | 35.0 | 70 | 0.9778 |
| No log | 36.0 | 72 | 1.0221 |
| No log | 37.0 | 74 | 1.0553 |
| No log | 38.0 | 76 | 1.0823 |
| No log | 39.0 | 78 | 1.1064 |
| No log | 40.0 | 80 | 1.1001 |
| No log | 41.0 | 82 | 1.0636 |
| No log | 42.0 | 84 | 1.0258 |
| No log | 43.0 | 86 | 1.0406 |
| No log | 44.0 | 88 | 1.0706 |
| No log | 45.0 | 90 | 1.1007 |
| No log | 46.0 | 92 | 1.1318 |
| No log | 47.0 | 94 | 1.1296 |
| No log | 48.0 | 96 | 1.0914 |
| No log | 49.0 | 98 | 1.0264 |
| No log | 50.0 | 100 | 0.9912 |
| No log | 51.0 | 102 | 0.9708 |
| No log | 52.0 | 104 | 0.9661 |
| No log | 53.0 | 106 | 1.0157 |
| No log | 54.0 | 108 | 1.0737 |
| No log | 55.0 | 110 | 1.1175 |
| No log | 56.0 | 112 | 1.1332 |
| No log | 57.0 | 114 | 1.1019 |
| No log | 58.0 | 116 | 1.0463 |
| No log | 59.0 | 118 | 0.9870 |
| No log | 60.0 | 120 | 0.9701 |
| No log | 61.0 | 122 | 0.9851 |
| No log | 62.0 | 124 | 1.0310 |
| No log | 63.0 | 126 | 1.0629 |
| No log | 64.0 | 128 | 1.0847 |
| No log | 65.0 | 130 | 1.0969 |
| No log | 66.0 | 132 | 1.1080 |
| No log | 67.0 | 134 | 1.1127 |
| No log | 68.0 | 136 | 1.1106 |
| No log | 69.0 | 138 | 1.1019 |
| No log | 70.0 | 140 | 1.1037 |
| No log | 71.0 | 142 | 1.0951 |
| No log | 72.0 | 144 | 1.0664 |
| No log | 73.0 | 146 | 1.0341 |
| No log | 74.0 | 148 | 1.0019 |
| No log | 75.0 | 150 | 1.0038 |
| No log | 76.0 | 152 | 1.0189 |
| No log | 77.0 | 154 | 1.0472 |
| No log | 78.0 | 156 | 1.0636 |
| No log | 79.0 | 158 | 1.0576 |
| No log | 80.0 | 160 | 1.0673 |
| No log | 81.0 | 162 | 1.0625 |
| No log | 82.0 | 164 | 1.0485 |
| No log | 83.0 | 166 | 1.0415 |
| No log | 84.0 | 168 | 1.0597 |
| No log | 85.0 | 170 | 1.0796 |
| No log | 86.0 | 172 | 1.0903 |
| No log | 87.0 | 174 | 1.0905 |
| No log | 88.0 | 176 | 1.0769 |
| No log | 89.0 | 178 | 1.0549 |
| No log | 90.0 | 180 | 1.0413 |
| No log | 91.0 | 182 | 1.0503 |
| No log | 92.0 | 184 | 1.0658 |
| No log | 93.0 | 186 | 1.0616 |
| No log | 94.0 | 188 | 1.0636 |
| No log | 95.0 | 190 | 1.0525 |
| No log | 96.0 | 192 | 1.0297 |
| No log | 97.0 | 194 | 1.0130 |
| No log | 98.0 | 196 | 1.0077 |
| No log | 99.0 | 198 | 1.0196 |
| No log | 100.0 | 200 | 1.0418 |
| No log | 101.0 | 202 | 1.0621 |
| No log | 102.0 | 204 | 1.0737 |
| No log | 103.0 | 206 | 1.0714 |
| No log | 104.0 | 208 | 1.0776 |
| No log | 105.0 | 210 | 1.0692 |
| No log | 106.0 | 212 | 1.0693 |
| No log | 107.0 | 214 | 1.0740 |
| No log | 108.0 | 216 | 1.0730 |
| No log | 109.0 | 218 | 1.0573 |
| No log | 110.0 | 220 | 1.0476 |
| No log | 111.0 | 222 | 1.0598 |
| No log | 112.0 | 224 | 1.0730 |
| No log | 113.0 | 226 | 1.0757 |
| No log | 114.0 | 228 | 1.0735 |
| No log | 115.0 | 230 | 1.0937 |
| No log | 116.0 | 232 | 1.1165 |
| No log | 117.0 | 234 | 1.1177 |
| No log | 118.0 | 236 | 1.1094 |
| No log | 119.0 | 238 | 1.0878 |
| No log | 120.0 | 240 | 1.0693 |
| No log | 121.0 | 242 | 1.0644 |
| No log | 122.0 | 244 | 1.0564 |
| No log | 123.0 | 246 | 1.0484 |
| No log | 124.0 | 248 | 1.0383 |
| No log | 125.0 | 250 | 1.0359 |
| No log | 126.0 | 252 | 1.0719 |
| No log | 127.0 | 254 | 1.1024 |
| No log | 128.0 | 256 | 1.1000 |
| No log | 129.0 | 258 | 1.1098 |
| No log | 130.0 | 260 | 1.1148 |
| No log | 131.0 | 262 | 1.1099 |
| No log | 132.0 | 264 | 1.0871 |
| No log | 133.0 | 266 | 1.0714 |
| No log | 134.0 | 268 | 1.0524 |
| No log | 135.0 | 270 | 1.0408 |
| No log | 136.0 | 272 | 1.0388 |
| No log | 137.0 | 274 | 1.0481 |
| No log | 138.0 | 276 | 1.0514 |
| No log | 139.0 | 278 | 1.0457 |
| No log | 140.0 | 280 | 1.0376 |
| No log | 141.0 | 282 | 1.0347 |
| No log | 142.0 | 284 | 1.0286 |
| No log | 143.0 | 286 | 1.0392 |
| No log | 144.0 | 288 | 1.0626 |
| No log | 145.0 | 290 | 1.0935 |
| No log | 146.0 | 292 | 1.1031 |
| No log | 147.0 | 294 | 1.1218 |
| No log | 148.0 | 296 | 1.1417 |
| No log | 149.0 | 298 | 1.1460 |
| No log | 150.0 | 300 | 1.1303 |
| No log | 151.0 | 302 | 1.1026 |
| No log | 152.0 | 304 | 1.0870 |
| No log | 153.0 | 306 | 1.0891 |
| No log | 154.0 | 308 | 1.0935 |
| No log | 155.0 | 310 | 1.0832 |
| No log | 156.0 | 312 | 1.0674 |
| No log | 157.0 | 314 | 1.0468 |
| No log | 158.0 | 316 | 1.0353 |
| No log | 159.0 | 318 | 1.0361 |
| No log | 160.0 | 320 | 1.0585 |
| No log | 161.0 | 322 | 1.0828 |
| No log | 162.0 | 324 | 1.0944 |
| No log | 163.0 | 326 | 1.1013 |
| No log | 164.0 | 328 | 1.0925 |
| No log | 165.0 | 330 | 1.0779 |
| No log | 166.0 | 332 | 1.0566 |
| No log | 167.0 | 334 | 1.0382 |
| No log | 168.0 | 336 | 1.0354 |
| No log | 169.0 | 338 | 1.0560 |
| No log | 170.0 | 340 | 1.0823 |
| No log | 171.0 | 342 | 1.1059 |
| No log | 172.0 | 344 | 1.1307 |
| No log | 173.0 | 346 | 1.1385 |
| No log | 174.0 | 348 | 1.1315 |
| No log | 175.0 | 350 | 1.1213 |
| No log | 176.0 | 352 | 1.0984 |
| No log | 177.0 | 354 | 1.0691 |
| No log | 178.0 | 356 | 1.0427 |
| No log | 179.0 | 358 | 1.0279 |
| No log | 180.0 | 360 | 1.0153 |
| No log | 181.0 | 362 | 1.0028 |
| No log | 182.0 | 364 | 0.9902 |
| No log | 183.0 | 366 | 0.9820 |
| No log | 184.0 | 368 | 0.9917 |
| No log | 185.0 | 370 | 1.0080 |
| No log | 186.0 | 372 | 1.0296 |
| No log | 187.0 | 374 | 1.0526 |
| No log | 188.0 | 376 | 1.0706 |
| No log | 189.0 | 378 | 1.0693 |
| No log | 190.0 | 380 | 1.0448 |
| No log | 191.0 | 382 | 1.0449 |
| No log | 192.0 | 384 | 1.0386 |
| No log | 193.0 | 386 | 1.0267 |
| No log | 194.0 | 388 | 1.0185 |
| No log | 195.0 | 390 | 1.0379 |
| No log | 196.0 | 392 | 1.0670 |
| No log | 197.0 | 394 | 1.1031 |
| No log | 198.0 | 396 | 1.1522 |
| No log | 199.0 | 398 | 1.1903 |
| No log | 200.0 | 400 | 1.1907 |
| No log | 201.0 | 402 | 1.1490 |
| No log | 202.0 | 404 | 1.0990 |
| No log | 203.0 | 406 | 1.0487 |
| No log | 204.0 | 408 | 1.0177 |
| No log | 205.0 | 410 | 0.9967 |
| No log | 206.0 | 412 | 1.0033 |
| No log | 207.0 | 414 | 1.0289 |
| No log | 208.0 | 416 | 1.0499 |
| No log | 209.0 | 418 | 1.1461 |
| No log | 210.0 | 420 | 1.2037 |
| No log | 211.0 | 422 | 1.2032 |
| No log | 212.0 | 424 | 1.1546 |
| No log | 213.0 | 426 | 1.0863 |
| No log | 214.0 | 428 | 1.0477 |
| No log | 215.0 | 430 | 1.0285 |
| No log | 216.0 | 432 | 1.0164 |
| No log | 217.0 | 434 | 1.0022 |
| No log | 218.0 | 436 | 1.0188 |
| No log | 219.0 | 438 | 1.0863 |
| No log | 220.0 | 440 | 1.1806 |
| No log | 221.0 | 442 | 1.1640 |
| No log | 222.0 | 444 | 1.1038 |
| No log | 223.0 | 446 | 1.0997 |
| No log | 224.0 | 448 | 1.1057 |
| No log | 225.0 | 450 | 1.1073 |
| No log | 226.0 | 452 | 1.0999 |
| No log | 227.0 | 454 | 1.0873 |
| No log | 228.0 | 456 | 1.0711 |
| No log | 229.0 | 458 | 1.0629 |
| No log | 230.0 | 460 | 1.0690 |
| No log | 231.0 | 462 | 1.0740 |
| No log | 232.0 | 464 | 1.0807 |
| No log | 233.0 | 466 | 1.0751 |
| No log | 234.0 | 468 | 1.0603 |
| No log | 235.0 | 470 | 1.0435 |
| No log | 236.0 | 472 | 1.0437 |
| No log | 237.0 | 474 | 1.0487 |
| No log | 238.0 | 476 | 1.0548 |
| No log | 239.0 | 478 | 1.0587 |
| No log | 240.0 | 480 | 1.0561 |
| No log | 241.0 | 482 | 1.0617 |
| No log | 242.0 | 484 | 1.0528 |
| No log | 243.0 | 486 | 1.0466 |
| No log | 244.0 | 488 | 1.0586 |
| No log | 245.0 | 490 | 1.0757 |
| No log | 246.0 | 492 | 1.0801 |
| No log | 247.0 | 494 | 1.0707 |
| No log | 248.0 | 496 | 1.0595 |
| No log | 249.0 | 498 | 1.0623 |
| 0.5922 | 250.0 | 500 | 1.1042 |
| 0.5922 | 251.0 | 502 | 1.1355 |
| 0.5922 | 252.0 | 504 | 1.1485 |
| 0.5922 | 253.0 | 506 | 1.1474 |
| 0.5922 | 254.0 | 508 | 1.1430 |
| 0.5922 | 255.0 | 510 | 1.1356 |
| 0.5922 | 256.0 | 512 | 1.1247 |
| 0.5922 | 257.0 | 514 | 1.1202 |
| 0.5922 | 258.0 | 516 | 1.1274 |
| 0.5922 | 259.0 | 518 | 1.1533 |
| 0.5922 | 260.0 | 520 | 1.1922 |
| 0.5922 | 261.0 | 522 | 1.2005 |
| 0.5922 | 262.0 | 524 | 1.1545 |
| 0.5922 | 263.0 | 526 | 1.1399 |
| 0.5922 | 264.0 | 528 | 1.1310 |
| 0.5922 | 265.0 | 530 | 1.1135 |
| 0.5922 | 266.0 | 532 | 1.0999 |
| 0.5922 | 267.0 | 534 | 1.0811 |
| 0.5922 | 268.0 | 536 | 1.0788 |
| 0.5922 | 269.0 | 538 | 1.0726 |
| 0.5922 | 270.0 | 540 | 1.0605 |
| 0.5922 | 271.0 | 542 | 1.0634 |
| 0.5922 | 272.0 | 544 | 1.0738 |
| 0.5922 | 273.0 | 546 | 1.0793 |
| 0.5922 | 274.0 | 548 | 1.0855 |
| 0.5922 | 275.0 | 550 | 1.1032 |
| 0.5922 | 276.0 | 552 | 1.1056 |
| 0.5922 | 277.0 | 554 | 1.0985 |
| 0.5922 | 278.0 | 556 | 1.1000 |
| 0.5922 | 279.0 | 558 | 1.0888 |
| 0.5922 | 280.0 | 560 | 1.0638 |
| 0.5922 | 281.0 | 562 | 1.0319 |
| 0.5922 | 282.0 | 564 | 1.0054 |
| 0.5922 | 283.0 | 566 | 0.9904 |
| 0.5922 | 284.0 | 568 | 0.9816 |
| 0.5922 | 285.0 | 570 | 0.9823 |
| 0.5922 | 286.0 | 572 | 0.9940 |
| 0.5922 | 287.0 | 574 | 1.0440 |
| 0.5922 | 288.0 | 576 | 1.0786 |
| 0.5922 | 289.0 | 578 | 1.0955 |
| 0.5922 | 290.0 | 580 | 1.1019 |
| 0.5922 | 291.0 | 582 | 1.1052 |
| 0.5922 | 292.0 | 584 | 1.0964 |
| 0.5922 | 293.0 | 586 | 1.0807 |
| 0.5922 | 294.0 | 588 | 1.0619 |
| 0.5922 | 295.0 | 590 | 1.0467 |
| 0.5922 | 296.0 | 592 | 1.0304 |
| 0.5922 | 297.0 | 594 | 1.0267 |
| 0.5922 | 298.0 | 596 | 1.0341 |
| 0.5922 | 299.0 | 598 | 1.0457 |
| 0.5922 | 300.0 | 600 | 1.0669 |
| 0.5922 | 301.0 | 602 | 1.1006 |
| 0.5922 | 302.0 | 604 | 1.1248 |
| 0.5922 | 303.0 | 606 | 1.1403 |
| 0.5922 | 304.0 | 608 | 1.1456 |
| 0.5922 | 305.0 | 610 | 1.1374 |
| 0.5922 | 306.0 | 612 | 1.1352 |
| 0.5922 | 307.0 | 614 | 1.1282 |
| 0.5922 | 308.0 | 616 | 1.1164 |
| 0.5922 | 309.0 | 618 | 1.1067 |
| 0.5922 | 310.0 | 620 | 1.1046 |
| 0.5922 | 311.0 | 622 | 1.0876 |
| 0.5922 | 312.0 | 624 | 1.0570 |
| 0.5922 | 313.0 | 626 | 1.0376 |
| 0.5922 | 314.0 | 628 | 1.0298 |
| 0.5922 | 315.0 | 630 | 1.0233 |
| 0.5922 | 316.0 | 632 | 1.0232 |
| 0.5922 | 317.0 | 634 | 1.0071 |
| 0.5922 | 318.0 | 636 | 0.9817 |
| 0.5922 | 319.0 | 638 | 0.9613 |
| 0.5922 | 320.0 | 640 | 0.9502 |
| 0.5922 | 321.0 | 642 | 0.9391 |
| 0.5922 | 322.0 | 644 | 0.9310 |
| 0.5922 | 323.0 | 646 | 0.9392 |
| 0.5922 | 324.0 | 648 | 0.9716 |
| 0.5922 | 325.0 | 650 | 1.0411 |
| 0.5922 | 326.0 | 652 | 1.0763 |
| 0.5922 | 327.0 | 654 | 1.1032 |
| 0.5922 | 328.0 | 656 | 1.1147 |
| 0.5922 | 329.0 | 658 | 1.1127 |
| 0.5922 | 330.0 | 660 | 1.0998 |
| 0.5922 | 331.0 | 662 | 1.0851 |
| 0.5922 | 332.0 | 664 | 1.0711 |
| 0.5922 | 333.0 | 666 | 1.0465 |
| 0.5922 | 334.0 | 668 | 1.0709 |
| 0.5922 | 335.0 | 670 | 1.1121 |
| 0.5922 | 336.0 | 672 | 1.1420 |
| 0.5922 | 337.0 | 674 | 1.1513 |
| 0.5922 | 338.0 | 676 | 1.1332 |
| 0.5922 | 339.0 | 678 | 1.0967 |
| 0.5922 | 340.0 | 680 | 1.0633 |
| 0.5922 | 341.0 | 682 | 1.0343 |
| 0.5922 | 342.0 | 684 | 1.0101 |
| 0.5922 | 343.0 | 686 | 0.9974 |
| 0.5922 | 344.0 | 688 | 0.9935 |
| 0.5922 | 345.0 | 690 | 0.9833 |
| 0.5922 | 346.0 | 692 | 0.9780 |
| 0.5922 | 347.0 | 694 | 0.9772 |
| 0.5922 | 348.0 | 696 | 0.9735 |
| 0.5922 | 349.0 | 698 | 0.9927 |
| 0.5922 | 350.0 | 700 | 1.0140 |
| 0.5922 | 351.0 | 702 | 1.0339 |
| 0.5922 | 352.0 | 704 | 1.0592 |
| 0.5922 | 353.0 | 706 | 1.0895 |
| 0.5922 | 354.0 | 708 | 1.1115 |
| 0.5922 | 355.0 | 710 | 1.1255 |
| 0.5922 | 356.0 | 712 | 1.1197 |
| 0.5922 | 357.0 | 714 | 1.1046 |
| 0.5922 | 358.0 | 716 | 1.0874 |
| 0.5922 | 359.0 | 718 | 1.0719 |
| 0.5922 | 360.0 | 720 | 1.0543 |
| 0.5922 | 361.0 | 722 | 1.0325 |
| 0.5922 | 362.0 | 724 | 1.0357 |
| 0.5922 | 363.0 | 726 | 1.0583 |
| 0.5922 | 364.0 | 728 | 1.0982 |
| 0.5922 | 365.0 | 730 | 1.1298 |
| 0.5922 | 366.0 | 732 | 1.1546 |
| 0.5922 | 367.0 | 734 | 1.1771 |
| 0.5922 | 368.0 | 736 | 1.1959 |
| 0.5922 | 369.0 | 738 | 1.2079 |
| 0.5922 | 370.0 | 740 | 1.2083 |
| 0.5922 | 371.0 | 742 | 1.2056 |
| 0.5922 | 372.0 | 744 | 1.1969 |
| 0.5922 | 373.0 | 746 | 1.1700 |
| 0.5922 | 374.0 | 748 | 1.1318 |
| 0.5922 | 375.0 | 750 | 1.1137 |
| 0.5922 | 376.0 | 752 | 1.1051 |
| 0.5922 | 377.0 | 754 | 1.0996 |
| 0.5922 | 378.0 | 756 | 1.0910 |
| 0.5922 | 379.0 | 758 | 1.1206 |
| 0.5922 | 380.0 | 760 | 1.1865 |
| 0.5922 | 381.0 | 762 | 1.2224 |
| 0.5922 | 382.0 | 764 | 1.2323 |
| 0.5922 | 383.0 | 766 | 1.2291 |
| 0.5922 | 384.0 | 768 | 1.2127 |
| 0.5922 | 385.0 | 770 | 1.1816 |
| 0.5922 | 386.0 | 772 | 1.1450 |
| 0.5922 | 387.0 | 774 | 1.1099 |
| 0.5922 | 388.0 | 776 | 1.0854 |
| 0.5922 | 389.0 | 778 | 1.0664 |
| 0.5922 | 390.0 | 780 | 1.0537 |
| 0.5922 | 391.0 | 782 | 1.0378 |
| 0.5922 | 392.0 | 784 | 1.0227 |
| 0.5922 | 393.0 | 786 | 1.0214 |
| 0.5922 | 394.0 | 788 | 1.0457 |
| 0.5922 | 395.0 | 790 | 1.0652 |
| 0.5922 | 396.0 | 792 | 1.0884 |
| 0.5922 | 397.0 | 794 | 1.1097 |
| 0.5922 | 398.0 | 796 | 1.1279 |
| 0.5922 | 399.0 | 798 | 1.1429 |
| 0.5922 | 400.0 | 800 | 1.1543 |
| 0.5922 | 401.0 | 802 | 1.1682 |
| 0.5922 | 402.0 | 804 | 1.1680 |
| 0.5922 | 403.0 | 806 | 1.1697 |
| 0.5922 | 404.0 | 808 | 1.1759 |
| 0.5922 | 405.0 | 810 | 1.1683 |
| 0.5922 | 406.0 | 812 | 1.1430 |
| 0.5922 | 407.0 | 814 | 1.1220 |
| 0.5922 | 408.0 | 816 | 1.1067 |
| 0.5922 | 409.0 | 818 | 1.1371 |
| 0.5922 | 410.0 | 820 | 1.1943 |
| 0.5922 | 411.0 | 822 | 1.2259 |
| 0.5922 | 412.0 | 824 | 1.2529 |
| 0.5922 | 413.0 | 826 | 1.2653 |
| 0.5922 | 414.0 | 828 | 1.2664 |
| 0.5922 | 415.0 | 830 | 1.2554 |
| 0.5922 | 416.0 | 832 | 1.2271 |
| 0.5922 | 417.0 | 834 | 1.2015 |
| 0.5922 | 418.0 | 836 | 1.1842 |
| 0.5922 | 419.0 | 838 | 1.1639 |
| 0.5922 | 420.0 | 840 | 1.1448 |
| 0.5922 | 421.0 | 842 | 1.1411 |
| 0.5922 | 422.0 | 844 | 1.1379 |
| 0.5922 | 423.0 | 846 | 1.1448 |
| 0.5922 | 424.0 | 848 | 1.1606 |
| 0.5922 | 425.0 | 850 | 1.1723 |
| 0.5922 | 426.0 | 852 | 1.2103 |
| 0.5922 | 427.0 | 854 | 1.2394 |
| 0.5922 | 428.0 | 856 | 1.2567 |
| 0.5922 | 429.0 | 858 | 1.2704 |
| 0.5922 | 430.0 | 860 | 1.2687 |
| 0.5922 | 431.0 | 862 | 1.2494 |
| 0.5922 | 432.0 | 864 | 1.2231 |
| 0.5922 | 433.0 | 866 | 1.2072 |
| 0.5922 | 434.0 | 868 | 1.1994 |
| 0.5922 | 435.0 | 870 | 1.1929 |
| 0.5922 | 436.0 | 872 | 1.1871 |
| 0.5922 | 437.0 | 874 | 1.1758 |
| 0.5922 | 438.0 | 876 | 1.1707 |
| 0.5922 | 439.0 | 878 | 1.1635 |
| 0.5922 | 440.0 | 880 | 1.1581 |
| 0.5922 | 441.0 | 882 | 1.1608 |
| 0.5922 | 442.0 | 884 | 1.1681 |
| 0.5922 | 443.0 | 886 | 1.1710 |
| 0.5922 | 444.0 | 888 | 1.1688 |
| 0.5922 | 445.0 | 890 | 1.1689 |
| 0.5922 | 446.0 | 892 | 1.1672 |
| 0.5922 | 447.0 | 894 | 1.1641 |
| 0.5922 | 448.0 | 896 | 1.1580 |
| 0.5922 | 449.0 | 898 | 1.1488 |
| 0.5922 | 450.0 | 900 | 1.1370 |
| 0.5922 | 451.0 | 902 | 1.1322 |
| 0.5922 | 452.0 | 904 | 1.1352 |
| 0.5922 | 453.0 | 906 | 1.1399 |
| 0.5922 | 454.0 | 908 | 1.1368 |
| 0.5922 | 455.0 | 910 | 1.1380 |
| 0.5922 | 456.0 | 912 | 1.1368 |
| 0.5922 | 457.0 | 914 | 1.1349 |
| 0.5922 | 458.0 | 916 | 1.1194 |
| 0.5922 | 459.0 | 918 | 1.1126 |
| 0.5922 | 460.0 | 920 | 1.1184 |
| 0.5922 | 461.0 | 922 | 1.1241 |
| 0.5922 | 462.0 | 924 | 1.1284 |
| 0.5922 | 463.0 | 926 | 1.1191 |
| 0.5922 | 464.0 | 928 | 1.1098 |
| 0.5922 | 465.0 | 930 | 1.1040 |
| 0.5922 | 466.0 | 932 | 1.0989 |
| 0.5922 | 467.0 | 934 | 1.0962 |
| 0.5922 | 468.0 | 936 | 1.1049 |
| 0.5922 | 469.0 | 938 | 1.1080 |
| 0.5922 | 470.0 | 940 | 1.1227 |
| 0.5922 | 471.0 | 942 | 1.1301 |
| 0.5922 | 472.0 | 944 | 1.1379 |
| 0.5922 | 473.0 | 946 | 1.1362 |
| 0.5922 | 474.0 | 948 | 1.1271 |
| 0.5922 | 475.0 | 950 | 1.1103 |
| 0.5922 | 476.0 | 952 | 1.0923 |
| 0.5922 | 477.0 | 954 | 1.0745 |
| 0.5922 | 478.0 | 956 | 1.0581 |
| 0.5922 | 479.0 | 958 | 1.0397 |
| 0.5922 | 480.0 | 960 | 1.0357 |
| 0.5922 | 481.0 | 962 | 1.0505 |
| 0.5922 | 482.0 | 964 | 1.0725 |
| 0.5922 | 483.0 | 966 | 1.0977 |
| 0.5922 | 484.0 | 968 | 1.1207 |
| 0.5922 | 485.0 | 970 | 1.1334 |
| 0.5922 | 486.0 | 972 | 1.1481 |
| 0.5922 | 487.0 | 974 | 1.1606 |
| 0.5922 | 488.0 | 976 | 1.1760 |
| 0.5922 | 489.0 | 978 | 1.1934 |
| 0.5922 | 490.0 | 980 | 1.2112 |
| 0.5922 | 491.0 | 982 | 1.2274 |
| 0.5922 | 492.0 | 984 | 1.2373 |
| 0.5922 | 493.0 | 986 | 1.2420 |
| 0.5922 | 494.0 | 988 | 1.2405 |
| 0.5922 | 495.0 | 990 | 1.2362 |
| 0.5922 | 496.0 | 992 | 1.2291 |
| 0.5922 | 497.0 | 994 | 1.2229 |
| 0.5922 | 498.0 | 996 | 1.2180 |
| 0.5922 | 499.0 | 998 | 1.2064 |
| 0.4792 | 500.0 | 1000 | 1.1870 |
| 0.4792 | 501.0 | 1002 | 1.1701 |
| 0.4792 | 502.0 | 1004 | 1.1521 |
| 0.4792 | 503.0 | 1006 | 1.1342 |
| 0.4792 | 504.0 | 1008 | 1.1211 |
| 0.4792 | 505.0 | 1010 | 1.1333 |
| 0.4792 | 506.0 | 1012 | 1.1748 |
| 0.4792 | 507.0 | 1014 | 1.2205 |
| 0.4792 | 508.0 | 1016 | 1.2448 |
| 0.4792 | 509.0 | 1018 | 1.2668 |
| 0.4792 | 510.0 | 1020 | 1.2806 |
| 0.4792 | 511.0 | 1022 | 1.2785 |
| 0.4792 | 512.0 | 1024 | 1.2667 |
| 0.4792 | 513.0 | 1026 | 1.2533 |
| 0.4792 | 514.0 | 1028 | 1.2393 |
| 0.4792 | 515.0 | 1030 | 1.2307 |
| 0.4792 | 516.0 | 1032 | 1.2121 |
| 0.4792 | 517.0 | 1034 | 1.1944 |
| 0.4792 | 518.0 | 1036 | 1.1826 |
| 0.4792 | 519.0 | 1038 | 1.1760 |
| 0.4792 | 520.0 | 1040 | 1.1693 |
| 0.4792 | 521.0 | 1042 | 1.1549 |
| 0.4792 | 522.0 | 1044 | 1.1443 |
| 0.4792 | 523.0 | 1046 | 1.1357 |
| 0.4792 | 524.0 | 1048 | 1.1093 |
| 0.4792 | 525.0 | 1050 | 1.0910 |
| 0.4792 | 526.0 | 1052 | 1.0887 |
| 0.4792 | 527.0 | 1054 | 1.0907 |
| 0.4792 | 528.0 | 1056 | 1.0936 |
| 0.4792 | 529.0 | 1058 | 1.1114 |
| 0.4792 | 530.0 | 1060 | 1.1261 |
| 0.4792 | 531.0 | 1062 | 1.1339 |
| 0.4792 | 532.0 | 1064 | 1.1357 |
| 0.4792 | 533.0 | 1066 | 1.1362 |
| 0.4792 | 534.0 | 1068 | 1.1376 |
| 0.4792 | 535.0 | 1070 | 1.1411 |
| 0.4792 | 536.0 | 1072 | 1.1442 |
| 0.4792 | 537.0 | 1074 | 1.1465 |
| 0.4792 | 538.0 | 1076 | 1.1502 |
| 0.4792 | 539.0 | 1078 | 1.1565 |
| 0.4792 | 540.0 | 1080 | 1.1621 |
| 0.4792 | 541.0 | 1082 | 1.1633 |
| 0.4792 | 542.0 | 1084 | 1.1583 |
| 0.4792 | 543.0 | 1086 | 1.1549 |
| 0.4792 | 544.0 | 1088 | 1.1556 |
| 0.4792 | 545.0 | 1090 | 1.1581 |
| 0.4792 | 546.0 | 1092 | 1.1593 |
| 0.4792 | 547.0 | 1094 | 1.1534 |
| 0.4792 | 548.0 | 1096 | 1.1464 |
| 0.4792 | 549.0 | 1098 | 1.1383 |
| 0.4792 | 550.0 | 1100 | 1.1354 |
| 0.4792 | 551.0 | 1102 | 1.1375 |
| 0.4792 | 552.0 | 1104 | 1.1415 |
| 0.4792 | 553.0 | 1106 | 1.1410 |
| 0.4792 | 554.0 | 1108 | 1.1455 |
| 0.4792 | 555.0 | 1110 | 1.1758 |
| 0.4792 | 556.0 | 1112 | 1.2052 |
| 0.4792 | 557.0 | 1114 | 1.2301 |
| 0.4792 | 558.0 | 1116 | 1.2503 |
| 0.4792 | 559.0 | 1118 | 1.2638 |
| 0.4792 | 560.0 | 1120 | 1.2686 |
| 0.4792 | 561.0 | 1122 | 1.2690 |
| 0.4792 | 562.0 | 1124 | 1.2661 |
| 0.4792 | 563.0 | 1126 | 1.2470 |
| 0.4792 | 564.0 | 1128 | 1.2317 |
| 0.4792 | 565.0 | 1130 | 1.2235 |
| 0.4792 | 566.0 | 1132 | 1.2167 |
| 0.4792 | 567.0 | 1134 | 1.2083 |
| 0.4792 | 568.0 | 1136 | 1.2027 |
| 0.4792 | 569.0 | 1138 | 1.1978 |
| 0.4792 | 570.0 | 1140 | 1.1935 |
| 0.4792 | 571.0 | 1142 | 1.1916 |
| 0.4792 | 572.0 | 1144 | 1.1881 |
| 0.4792 | 573.0 | 1146 | 1.1847 |
| 0.4792 | 574.0 | 1148 | 1.1838 |
| 0.4792 | 575.0 | 1150 | 1.1814 |
| 0.4792 | 576.0 | 1152 | 1.1799 |
| 0.4792 | 577.0 | 1154 | 1.1795 |
| 0.4792 | 578.0 | 1156 | 1.1814 |
| 0.4792 | 579.0 | 1158 | 1.1812 |
| 0.4792 | 580.0 | 1160 | 1.1826 |
| 0.4792 | 581.0 | 1162 | 1.1829 |
| 0.4792 | 582.0 | 1164 | 1.1802 |
| 0.4792 | 583.0 | 1166 | 1.1759 |
| 0.4792 | 584.0 | 1168 | 1.1783 |
| 0.4792 | 585.0 | 1170 | 1.1777 |
| 0.4792 | 586.0 | 1172 | 1.1752 |
| 0.4792 | 587.0 | 1174 | 1.1729 |
| 0.4792 | 588.0 | 1176 | 1.1714 |
| 0.4792 | 589.0 | 1178 | 1.1687 |
| 0.4792 | 590.0 | 1180 | 1.1626 |
| 0.4792 | 591.0 | 1182 | 1.1565 |
| 0.4792 | 592.0 | 1184 | 1.1533 |
| 0.4792 | 593.0 | 1186 | 1.1463 |
| 0.4792 | 594.0 | 1188 | 1.1385 |
| 0.4792 | 595.0 | 1190 | 1.1307 |
| 0.4792 | 596.0 | 1192 | 1.1245 |
| 0.4792 | 597.0 | 1194 | 1.1206 |
| 0.4792 | 598.0 | 1196 | 1.1181 |
| 0.4792 | 599.0 | 1198 | 1.1170 |
| 0.4792 | 600.0 | 1200 | 1.1166 |
| 0.4792 | 601.0 | 1202 | 1.1185 |
| 0.4792 | 602.0 | 1204 | 1.1217 |
| 0.4792 | 603.0 | 1206 | 1.1234 |
| 0.4792 | 604.0 | 1208 | 1.1267 |
| 0.4792 | 605.0 | 1210 | 1.1343 |
| 0.4792 | 606.0 | 1212 | 1.1440 |
| 0.4792 | 607.0 | 1214 | 1.1514 |
| 0.4792 | 608.0 | 1216 | 1.1583 |
| 0.4792 | 609.0 | 1218 | 1.1636 |
| 0.4792 | 610.0 | 1220 | 1.1648 |
| 0.4792 | 611.0 | 1222 | 1.1679 |
| 0.4792 | 612.0 | 1224 | 1.1714 |
| 0.4792 | 613.0 | 1226 | 1.1775 |
| 0.4792 | 614.0 | 1228 | 1.1814 |
| 0.4792 | 615.0 | 1230 | 1.1858 |
| 0.4792 | 616.0 | 1232 | 1.1863 |
| 0.4792 | 617.0 | 1234 | 1.1827 |
| 0.4792 | 618.0 | 1236 | 1.1771 |
| 0.4792 | 619.0 | 1238 | 1.1700 |
| 0.4792 | 620.0 | 1240 | 1.1630 |
| 0.4792 | 621.0 | 1242 | 1.1596 |
| 0.4792 | 622.0 | 1244 | 1.1547 |
| 0.4792 | 623.0 | 1246 | 1.1527 |
| 0.4792 | 624.0 | 1248 | 1.1498 |
| 0.4792 | 625.0 | 1250 | 1.1506 |
| 0.4792 | 626.0 | 1252 | 1.1567 |
| 0.4792 | 627.0 | 1254 | 1.1570 |
| 0.4792 | 628.0 | 1256 | 1.1602 |
| 0.4792 | 629.0 | 1258 | 1.1665 |
| 0.4792 | 630.0 | 1260 | 1.1751 |
| 0.4792 | 631.0 | 1262 | 1.1787 |
| 0.4792 | 632.0 | 1264 | 1.1836 |
| 0.4792 | 633.0 | 1266 | 1.1835 |
| 0.4792 | 634.0 | 1268 | 1.1883 |
| 0.4792 | 635.0 | 1270 | 1.1999 |
| 0.4792 | 636.0 | 1272 | 1.2062 |
| 0.4792 | 637.0 | 1274 | 1.2010 |
| 0.4792 | 638.0 | 1276 | 1.1945 |
| 0.4792 | 639.0 | 1278 | 1.1911 |
| 0.4792 | 640.0 | 1280 | 1.1834 |
| 0.4792 | 641.0 | 1282 | 1.1767 |
| 0.4792 | 642.0 | 1284 | 1.1713 |
| 0.4792 | 643.0 | 1286 | 1.1658 |
| 0.4792 | 644.0 | 1288 | 1.1588 |
| 0.4792 | 645.0 | 1290 | 1.1481 |
| 0.4792 | 646.0 | 1292 | 1.1397 |
| 0.4792 | 647.0 | 1294 | 1.1327 |
| 0.4792 | 648.0 | 1296 | 1.1300 |
| 0.4792 | 649.0 | 1298 | 1.1335 |
| 0.4792 | 650.0 | 1300 | 1.1377 |
| 0.4792 | 651.0 | 1302 | 1.1401 |
| 0.4792 | 652.0 | 1304 | 1.1380 |
| 0.4792 | 653.0 | 1306 | 1.1439 |
| 0.4792 | 654.0 | 1308 | 1.1491 |
| 0.4792 | 655.0 | 1310 | 1.1540 |
| 0.4792 | 656.0 | 1312 | 1.1587 |
| 0.4792 | 657.0 | 1314 | 1.1655 |
| 0.4792 | 658.0 | 1316 | 1.1701 |
| 0.4792 | 659.0 | 1318 | 1.1704 |
| 0.4792 | 660.0 | 1320 | 1.1683 |
| 0.4792 | 661.0 | 1322 | 1.1714 |
| 0.4792 | 662.0 | 1324 | 1.1762 |
| 0.4792 | 663.0 | 1326 | 1.1827 |
| 0.4792 | 664.0 | 1328 | 1.1873 |
| 0.4792 | 665.0 | 1330 | 1.1880 |
| 0.4792 | 666.0 | 1332 | 1.1914 |
| 0.4792 | 667.0 | 1334 | 1.1900 |
| 0.4792 | 668.0 | 1336 | 1.1913 |
| 0.4792 | 669.0 | 1338 | 1.1942 |
| 0.4792 | 670.0 | 1340 | 1.1936 |
| 0.4792 | 671.0 | 1342 | 1.1963 |
| 0.4792 | 672.0 | 1344 | 1.1999 |
| 0.4792 | 673.0 | 1346 | 1.2057 |
| 0.4792 | 674.0 | 1348 | 1.2091 |
| 0.4792 | 675.0 | 1350 | 1.2129 |
| 0.4792 | 676.0 | 1352 | 1.2183 |
| 0.4792 | 677.0 | 1354 | 1.2206 |
| 0.4792 | 678.0 | 1356 | 1.2175 |
| 0.4792 | 679.0 | 1358 | 1.2180 |
| 0.4792 | 680.0 | 1360 | 1.2199 |
| 0.4792 | 681.0 | 1362 | 1.2218 |
| 0.4792 | 682.0 | 1364 | 1.2194 |
| 0.4792 | 683.0 | 1366 | 1.2076 |
| 0.4792 | 684.0 | 1368 | 1.2016 |
| 0.4792 | 685.0 | 1370 | 1.1956 |
| 0.4792 | 686.0 | 1372 | 1.1919 |
| 0.4792 | 687.0 | 1374 | 1.1818 |
| 0.4792 | 688.0 | 1376 | 1.1701 |
| 0.4792 | 689.0 | 1378 | 1.1524 |
| 0.4792 | 690.0 | 1380 | 1.1407 |
| 0.4792 | 691.0 | 1382 | 1.1433 |
| 0.4792 | 692.0 | 1384 | 1.1523 |
| 0.4792 | 693.0 | 1386 | 1.1662 |
| 0.4792 | 694.0 | 1388 | 1.1731 |
| 0.4792 | 695.0 | 1390 | 1.1810 |
| 0.4792 | 696.0 | 1392 | 1.1882 |
| 0.4792 | 697.0 | 1394 | 1.1950 |
| 0.4792 | 698.0 | 1396 | 1.1971 |
| 0.4792 | 699.0 | 1398 | 1.1951 |
| 0.4792 | 700.0 | 1400 | 1.1928 |
| 0.4792 | 701.0 | 1402 | 1.1901 |
| 0.4792 | 702.0 | 1404 | 1.1929 |
| 0.4792 | 703.0 | 1406 | 1.2222 |
| 0.4792 | 704.0 | 1408 | 1.2495 |
| 0.4792 | 705.0 | 1410 | 1.2651 |
| 0.4792 | 706.0 | 1412 | 1.2712 |
| 0.4792 | 707.0 | 1414 | 1.2724 |
| 0.4792 | 708.0 | 1416 | 1.2727 |
| 0.4792 | 709.0 | 1418 | 1.2684 |
| 0.4792 | 710.0 | 1420 | 1.2544 |
| 0.4792 | 711.0 | 1422 | 1.2324 |
| 0.4792 | 712.0 | 1424 | 1.2100 |
| 0.4792 | 713.0 | 1426 | 1.1854 |
| 0.4792 | 714.0 | 1428 | 1.1615 |
| 0.4792 | 715.0 | 1430 | 1.1443 |
| 0.4792 | 716.0 | 1432 | 1.1367 |
| 0.4792 | 717.0 | 1434 | 1.1342 |
| 0.4792 | 718.0 | 1436 | 1.1248 |
| 0.4792 | 719.0 | 1438 | 1.1283 |
| 0.4792 | 720.0 | 1440 | 1.1349 |
| 0.4792 | 721.0 | 1442 | 1.1442 |
| 0.4792 | 722.0 | 1444 | 1.1646 |
| 0.4792 | 723.0 | 1446 | 1.1874 |
| 0.4792 | 724.0 | 1448 | 1.2006 |
| 0.4792 | 725.0 | 1450 | 1.2026 |
| 0.4792 | 726.0 | 1452 | 1.1950 |
| 0.4792 | 727.0 | 1454 | 1.1846 |
| 0.4792 | 728.0 | 1456 | 1.1780 |
| 0.4792 | 729.0 | 1458 | 1.1795 |
| 0.4792 | 730.0 | 1460 | 1.1899 |
| 0.4792 | 731.0 | 1462 | 1.1999 |
| 0.4792 | 732.0 | 1464 | 1.2046 |
| 0.4792 | 733.0 | 1466 | 1.2094 |
| 0.4792 | 734.0 | 1468 | 1.2189 |
| 0.4792 | 735.0 | 1470 | 1.2279 |
| 0.4792 | 736.0 | 1472 | 1.2344 |
| 0.4792 | 737.0 | 1474 | 1.2449 |
| 0.4792 | 738.0 | 1476 | 1.2554 |
| 0.4792 | 739.0 | 1478 | 1.2613 |
| 0.4792 | 740.0 | 1480 | 1.2638 |
| 0.4792 | 741.0 | 1482 | 1.2618 |
| 0.4792 | 742.0 | 1484 | 1.2537 |
| 0.4792 | 743.0 | 1486 | 1.2429 |
| 0.4792 | 744.0 | 1488 | 1.2339 |
| 0.4792 | 745.0 | 1490 | 1.2282 |
| 0.4792 | 746.0 | 1492 | 1.2234 |
| 0.4792 | 747.0 | 1494 | 1.2199 |
| 0.4792 | 748.0 | 1496 | 1.2163 |
| 0.4792 | 749.0 | 1498 | 1.2115 |
| 0.4781 | 750.0 | 1500 | 1.2059 |
| 0.4781 | 751.0 | 1502 | 1.2001 |
| 0.4781 | 752.0 | 1504 | 1.1934 |
| 0.4781 | 753.0 | 1506 | 1.1857 |
| 0.4781 | 754.0 | 1508 | 1.1805 |
| 0.4781 | 755.0 | 1510 | 1.1772 |
| 0.4781 | 756.0 | 1512 | 1.1799 |
| 0.4781 | 757.0 | 1514 | 1.1866 |
| 0.4781 | 758.0 | 1516 | 1.1904 |
| 0.4781 | 759.0 | 1518 | 1.1973 |
| 0.4781 | 760.0 | 1520 | 1.2044 |
| 0.4781 | 761.0 | 1522 | 1.2101 |
| 0.4781 | 762.0 | 1524 | 1.2166 |
| 0.4781 | 763.0 | 1526 | 1.2223 |
| 0.4781 | 764.0 | 1528 | 1.2249 |
| 0.4781 | 765.0 | 1530 | 1.2234 |
| 0.4781 | 766.0 | 1532 | 1.2183 |
| 0.4781 | 767.0 | 1534 | 1.2077 |
| 0.4781 | 768.0 | 1536 | 1.1955 |
| 0.4781 | 769.0 | 1538 | 1.1845 |
| 0.4781 | 770.0 | 1540 | 1.1760 |
| 0.4781 | 771.0 | 1542 | 1.1666 |
| 0.4781 | 772.0 | 1544 | 1.1566 |
| 0.4781 | 773.0 | 1546 | 1.1473 |
| 0.4781 | 774.0 | 1548 | 1.1564 |
| 0.4781 | 775.0 | 1550 | 1.1868 |
| 0.4781 | 776.0 | 1552 | 1.2229 |
| 0.4781 | 777.0 | 1554 | 1.2565 |
| 0.4781 | 778.0 | 1556 | 1.2804 |
| 0.4781 | 779.0 | 1558 | 1.2965 |
| 0.4781 | 780.0 | 1560 | 1.3019 |
| 0.4781 | 781.0 | 1562 | 1.2980 |
| 0.4781 | 782.0 | 1564 | 1.2800 |
| 0.4781 | 783.0 | 1566 | 1.2584 |
| 0.4781 | 784.0 | 1568 | 1.2354 |
| 0.4781 | 785.0 | 1570 | 1.2070 |
| 0.4781 | 786.0 | 1572 | 1.1817 |
| 0.4781 | 787.0 | 1574 | 1.1501 |
| 0.4781 | 788.0 | 1576 | 1.1280 |
| 0.4781 | 789.0 | 1578 | 1.1070 |
| 0.4781 | 790.0 | 1580 | 1.0882 |
| 0.4781 | 791.0 | 1582 | 1.0766 |
| 0.4781 | 792.0 | 1584 | 1.0695 |
| 0.4781 | 793.0 | 1586 | 1.0647 |
| 0.4781 | 794.0 | 1588 | 1.0601 |
| 0.4781 | 795.0 | 1590 | 1.0702 |
| 0.4781 | 796.0 | 1592 | 1.0913 |
| 0.4781 | 797.0 | 1594 | 1.1163 |
| 0.4781 | 798.0 | 1596 | 1.1317 |
| 0.4781 | 799.0 | 1598 | 1.1417 |
| 0.4781 | 800.0 | 1600 | 1.1436 |
| 0.4781 | 801.0 | 1602 | 1.1484 |
| 0.4781 | 802.0 | 1604 | 1.1558 |
| 0.4781 | 803.0 | 1606 | 1.1636 |
| 0.4781 | 804.0 | 1608 | 1.1713 |
| 0.4781 | 805.0 | 1610 | 1.1761 |
| 0.4781 | 806.0 | 1612 | 1.1785 |
| 0.4781 | 807.0 | 1614 | 1.1844 |
| 0.4781 | 808.0 | 1616 | 1.1894 |
| 0.4781 | 809.0 | 1618 | 1.1895 |
| 0.4781 | 810.0 | 1620 | 1.1920 |
| 0.4781 | 811.0 | 1622 | 1.1922 |
| 0.4781 | 812.0 | 1624 | 1.1919 |
| 0.4781 | 813.0 | 1626 | 1.1927 |
| 0.4781 | 814.0 | 1628 | 1.1932 |
| 0.4781 | 815.0 | 1630 | 1.1914 |
| 0.4781 | 816.0 | 1632 | 1.1825 |
| 0.4781 | 817.0 | 1634 | 1.1768 |
| 0.4781 | 818.0 | 1636 | 1.1710 |
| 0.4781 | 819.0 | 1638 | 1.1672 |
| 0.4781 | 820.0 | 1640 | 1.1666 |
| 0.4781 | 821.0 | 1642 | 1.1672 |
| 0.4781 | 822.0 | 1644 | 1.1686 |
| 0.4781 | 823.0 | 1646 | 1.1708 |
| 0.4781 | 824.0 | 1648 | 1.1773 |
| 0.4781 | 825.0 | 1650 | 1.1820 |
| 0.4781 | 826.0 | 1652 | 1.1842 |
| 0.4781 | 827.0 | 1654 | 1.1832 |
| 0.4781 | 828.0 | 1656 | 1.1823 |
| 0.4781 | 829.0 | 1658 | 1.1822 |
| 0.4781 | 830.0 | 1660 | 1.1804 |
| 0.4781 | 831.0 | 1662 | 1.1769 |
| 0.4781 | 832.0 | 1664 | 1.1693 |
| 0.4781 | 833.0 | 1666 | 1.1637 |
| 0.4781 | 834.0 | 1668 | 1.1581 |
| 0.4781 | 835.0 | 1670 | 1.1571 |
| 0.4781 | 836.0 | 1672 | 1.1530 |
| 0.4781 | 837.0 | 1674 | 1.1513 |
| 0.4781 | 838.0 | 1676 | 1.1508 |
| 0.4781 | 839.0 | 1678 | 1.1429 |
| 0.4781 | 840.0 | 1680 | 1.1364 |
| 0.4781 | 841.0 | 1682 | 1.1359 |
| 0.4781 | 842.0 | 1684 | 1.1387 |
| 0.4781 | 843.0 | 1686 | 1.1445 |
| 0.4781 | 844.0 | 1688 | 1.1511 |
| 0.4781 | 845.0 | 1690 | 1.1512 |
| 0.4781 | 846.0 | 1692 | 1.1458 |
| 0.4781 | 847.0 | 1694 | 1.1411 |
| 0.4781 | 848.0 | 1696 | 1.1313 |
| 0.4781 | 849.0 | 1698 | 1.1291 |
| 0.4781 | 850.0 | 1700 | 1.1321 |
| 0.4781 | 851.0 | 1702 | 1.1364 |
| 0.4781 | 852.0 | 1704 | 1.1394 |
| 0.4781 | 853.0 | 1706 | 1.1409 |
| 0.4781 | 854.0 | 1708 | 1.1408 |
| 0.4781 | 855.0 | 1710 | 1.1429 |
| 0.4781 | 856.0 | 1712 | 1.1432 |
| 0.4781 | 857.0 | 1714 | 1.1406 |
| 0.4781 | 858.0 | 1716 | 1.1338 |
| 0.4781 | 859.0 | 1718 | 1.1323 |
| 0.4781 | 860.0 | 1720 | 1.1269 |
| 0.4781 | 861.0 | 1722 | 1.1266 |
| 0.4781 | 862.0 | 1724 | 1.1309 |
| 0.4781 | 863.0 | 1726 | 1.1307 |
| 0.4781 | 864.0 | 1728 | 1.1335 |
| 0.4781 | 865.0 | 1730 | 1.1457 |
| 0.4781 | 866.0 | 1732 | 1.1556 |
| 0.4781 | 867.0 | 1734 | 1.1595 |
| 0.4781 | 868.0 | 1736 | 1.1620 |
| 0.4781 | 869.0 | 1738 | 1.1669 |
| 0.4781 | 870.0 | 1740 | 1.1735 |
| 0.4781 | 871.0 | 1742 | 1.1800 |
| 0.4781 | 872.0 | 1744 | 1.1844 |
| 0.4781 | 873.0 | 1746 | 1.1878 |
| 0.4781 | 874.0 | 1748 | 1.1913 |
| 0.4781 | 875.0 | 1750 | 1.1929 |
| 0.4781 | 876.0 | 1752 | 1.1946 |
| 0.4781 | 877.0 | 1754 | 1.1967 |
| 0.4781 | 878.0 | 1756 | 1.1969 |
| 0.4781 | 879.0 | 1758 | 1.1967 |
| 0.4781 | 880.0 | 1760 | 1.1952 |
| 0.4781 | 881.0 | 1762 | 1.1909 |
| 0.4781 | 882.0 | 1764 | 1.1881 |
| 0.4781 | 883.0 | 1766 | 1.1877 |
| 0.4781 | 884.0 | 1768 | 1.1866 |
| 0.4781 | 885.0 | 1770 | 1.1865 |
| 0.4781 | 886.0 | 1772 | 1.1888 |
| 0.4781 | 887.0 | 1774 | 1.1956 |
| 0.4781 | 888.0 | 1776 | 1.2021 |
| 0.4781 | 889.0 | 1778 | 1.2052 |
| 0.4781 | 890.0 | 1780 | 1.2025 |
| 0.4781 | 891.0 | 1782 | 1.1933 |
| 0.4781 | 892.0 | 1784 | 1.1826 |
| 0.4781 | 893.0 | 1786 | 1.1776 |
| 0.4781 | 894.0 | 1788 | 1.1763 |
| 0.4781 | 895.0 | 1790 | 1.1821 |
| 0.4781 | 896.0 | 1792 | 1.1854 |
| 0.4781 | 897.0 | 1794 | 1.1868 |
| 0.4781 | 898.0 | 1796 | 1.1878 |
| 0.4781 | 899.0 | 1798 | 1.1890 |
| 0.4781 | 900.0 | 1800 | 1.1905 |
| 0.4781 | 901.0 | 1802 | 1.1942 |
| 0.4781 | 902.0 | 1804 | 1.2136 |
| 0.4781 | 903.0 | 1806 | 1.2305 |
| 0.4781 | 904.0 | 1808 | 1.2426 |
| 0.4781 | 905.0 | 1810 | 1.2523 |
| 0.4781 | 906.0 | 1812 | 1.2593 |
| 0.4781 | 907.0 | 1814 | 1.2618 |
| 0.4781 | 908.0 | 1816 | 1.2625 |
| 0.4781 | 909.0 | 1818 | 1.2613 |
| 0.4781 | 910.0 | 1820 | 1.2596 |
| 0.4781 | 911.0 | 1822 | 1.2605 |
| 0.4781 | 912.0 | 1824 | 1.2504 |
| 0.4781 | 913.0 | 1826 | 1.2427 |
| 0.4781 | 914.0 | 1828 | 1.2358 |
| 0.4781 | 915.0 | 1830 | 1.2246 |
| 0.4781 | 916.0 | 1832 | 1.2173 |
| 0.4781 | 917.0 | 1834 | 1.2084 |
| 0.4781 | 918.0 | 1836 | 1.2005 |
| 0.4781 | 919.0 | 1838 | 1.1952 |
| 0.4781 | 920.0 | 1840 | 1.1893 |
| 0.4781 | 921.0 | 1842 | 1.1834 |
| 0.4781 | 922.0 | 1844 | 1.1787 |
| 0.4781 | 923.0 | 1846 | 1.1772 |
| 0.4781 | 924.0 | 1848 | 1.1807 |
| 0.4781 | 925.0 | 1850 | 1.1827 |
| 0.4781 | 926.0 | 1852 | 1.1802 |
| 0.4781 | 927.0 | 1854 | 1.1719 |
| 0.4781 | 928.0 | 1856 | 1.1645 |
| 0.4781 | 929.0 | 1858 | 1.1588 |
| 0.4781 | 930.0 | 1860 | 1.1545 |
| 0.4781 | 931.0 | 1862 | 1.1484 |
| 0.4781 | 932.0 | 1864 | 1.1457 |
| 0.4781 | 933.0 | 1866 | 1.1498 |
| 0.4781 | 934.0 | 1868 | 1.1540 |
| 0.4781 | 935.0 | 1870 | 1.1583 |
| 0.4781 | 936.0 | 1872 | 1.1654 |
| 0.4781 | 937.0 | 1874 | 1.1699 |
| 0.4781 | 938.0 | 1876 | 1.1711 |
| 0.4781 | 939.0 | 1878 | 1.1712 |
| 0.4781 | 940.0 | 1880 | 1.1711 |
| 0.4781 | 941.0 | 1882 | 1.1666 |
| 0.4781 | 942.0 | 1884 | 1.1616 |
| 0.4781 | 943.0 | 1886 | 1.1570 |
| 0.4781 | 944.0 | 1888 | 1.1553 |
| 0.4781 | 945.0 | 1890 | 1.1491 |
| 0.4781 | 946.0 | 1892 | 1.1462 |
| 0.4781 | 947.0 | 1894 | 1.1463 |
| 0.4781 | 948.0 | 1896 | 1.1474 |
| 0.4781 | 949.0 | 1898 | 1.1492 |
| 0.4781 | 950.0 | 1900 | 1.1492 |
| 0.4781 | 951.0 | 1902 | 1.1489 |
| 0.4781 | 952.0 | 1904 | 1.1479 |
| 0.4781 | 953.0 | 1906 | 1.1457 |
| 0.4781 | 954.0 | 1908 | 1.1440 |
| 0.4781 | 955.0 | 1910 | 1.1443 |
| 0.4781 | 956.0 | 1912 | 1.1447 |
| 0.4781 | 957.0 | 1914 | 1.1447 |
| 0.4781 | 958.0 | 1916 | 1.1444 |
| 0.4781 | 959.0 | 1918 | 1.1450 |
| 0.4781 | 960.0 | 1920 | 1.1451 |
| 0.4781 | 961.0 | 1922 | 1.1453 |
| 0.4781 | 962.0 | 1924 | 1.1463 |
| 0.4781 | 963.0 | 1926 | 1.1490 |
| 0.4781 | 964.0 | 1928 | 1.1537 |
| 0.4781 | 965.0 | 1930 | 1.1580 |
| 0.4781 | 966.0 | 1932 | 1.1643 |
| 0.4781 | 967.0 | 1934 | 1.1692 |
| 0.4781 | 968.0 | 1936 | 1.1718 |
| 0.4781 | 969.0 | 1938 | 1.1724 |
| 0.4781 | 970.0 | 1940 | 1.1727 |
| 0.4781 | 971.0 | 1942 | 1.1723 |
| 0.4781 | 972.0 | 1944 | 1.1693 |
| 0.4781 | 973.0 | 1946 | 1.1680 |
| 0.4781 | 974.0 | 1948 | 1.1688 |
| 0.4781 | 975.0 | 1950 | 1.1672 |
| 0.4781 | 976.0 | 1952 | 1.1627 |
| 0.4781 | 977.0 | 1954 | 1.1624 |
| 0.4781 | 978.0 | 1956 | 1.1678 |
| 0.4781 | 979.0 | 1958 | 1.1717 |
| 0.4781 | 980.0 | 1960 | 1.1736 |
| 0.4781 | 981.0 | 1962 | 1.1759 |
| 0.4781 | 982.0 | 1964 | 1.1795 |
| 0.4781 | 983.0 | 1966 | 1.1812 |
| 0.4781 | 984.0 | 1968 | 1.1834 |
| 0.4781 | 985.0 | 1970 | 1.1980 |
| 0.4781 | 986.0 | 1972 | 1.1982 |
| 0.4781 | 987.0 | 1974 | 1.1993 |
| 0.4781 | 988.0 | 1976 | 1.2106 |
| 0.4781 | 989.0 | 1978 | 1.2199 |
| 0.4781 | 990.0 | 1980 | 1.2036 |
| 0.4781 | 991.0 | 1982 | 1.1897 |
| 0.4781 | 992.0 | 1984 | 1.1758 |
| 0.4781 | 993.0 | 1986 | 1.1654 |
| 0.4781 | 994.0 | 1988 | 1.1530 |
| 0.4781 | 995.0 | 1990 | 1.1440 |
| 0.4781 | 996.0 | 1992 | 1.1348 |
| 0.4781 | 997.0 | 1994 | 1.1304 |
| 0.4781 | 998.0 | 1996 | 1.1295 |
| 0.4781 | 999.0 | 1998 | 1.1265 |
| 0.4785 | 1000.0 | 2000 | 1.1260 |
| 0.4785 | 1001.0 | 2002 | 1.1296 |
| 0.4785 | 1002.0 | 2004 | 1.1355 |
| 0.4785 | 1003.0 | 2006 | 1.1381 |
| 0.4785 | 1004.0 | 2008 | 1.1415 |
| 0.4785 | 1005.0 | 2010 | 1.1478 |
| 0.4785 | 1006.0 | 2012 | 1.1545 |
| 0.4785 | 1007.0 | 2014 | 1.1680 |
| 0.4785 | 1008.0 | 2016 | 1.1800 |
| 0.4785 | 1009.0 | 2018 | 1.1872 |
| 0.4785 | 1010.0 | 2020 | 1.1931 |
| 0.4785 | 1011.0 | 2022 | 1.1985 |
| 0.4785 | 1012.0 | 2024 | 1.2033 |
| 0.4785 | 1013.0 | 2026 | 1.2063 |
| 0.4785 | 1014.0 | 2028 | 1.2042 |
| 0.4785 | 1015.0 | 2030 | 1.2017 |
| 0.4785 | 1016.0 | 2032 | 1.2037 |
| 0.4785 | 1017.0 | 2034 | 1.2036 |
| 0.4785 | 1018.0 | 2036 | 1.2058 |
| 0.4785 | 1019.0 | 2038 | 1.2071 |
| 0.4785 | 1020.0 | 2040 | 1.2059 |
| 0.4785 | 1021.0 | 2042 | 1.1991 |
| 0.4785 | 1022.0 | 2044 | 1.1959 |
| 0.4785 | 1023.0 | 2046 | 1.1912 |
| 0.4785 | 1024.0 | 2048 | 1.1871 |
| 0.4785 | 1025.0 | 2050 | 1.2293 |
| 0.4785 | 1026.0 | 2052 | 1.2629 |
| 0.4785 | 1027.0 | 2054 | 1.2895 |
| 0.4785 | 1028.0 | 2056 | 1.3061 |
| 0.4785 | 1029.0 | 2058 | 1.3118 |
| 0.4785 | 1030.0 | 2060 | 1.3028 |
| 0.4785 | 1031.0 | 2062 | 1.2915 |
| 0.4785 | 1032.0 | 2064 | 1.2800 |
| 0.4785 | 1033.0 | 2066 | 1.2651 |
| 0.4785 | 1034.0 | 2068 | 1.2486 |
| 0.4785 | 1035.0 | 2070 | 1.2239 |
| 0.4785 | 1036.0 | 2072 | 1.2043 |
| 0.4785 | 1037.0 | 2074 | 1.1911 |
| 0.4785 | 1038.0 | 2076 | 1.1826 |
| 0.4785 | 1039.0 | 2078 | 1.1786 |
| 0.4785 | 1040.0 | 2080 | 1.1763 |
| 0.4785 | 1041.0 | 2082 | 1.1767 |
| 0.4785 | 1042.0 | 2084 | 1.1775 |
| 0.4785 | 1043.0 | 2086 | 1.1826 |
| 0.4785 | 1044.0 | 2088 | 1.1873 |
| 0.4785 | 1045.0 | 2090 | 1.1916 |
| 0.4785 | 1046.0 | 2092 | 1.2039 |
| 0.4785 | 1047.0 | 2094 | 1.2167 |
| 0.4785 | 1048.0 | 2096 | 1.2258 |
| 0.4785 | 1049.0 | 2098 | 1.2322 |
| 0.4785 | 1050.0 | 2100 | 1.2385 |
| 0.4785 | 1051.0 | 2102 | 1.2439 |
| 0.4785 | 1052.0 | 2104 | 1.2470 |
| 0.4785 | 1053.0 | 2106 | 1.2492 |
| 0.4785 | 1054.0 | 2108 | 1.2515 |
| 0.4785 | 1055.0 | 2110 | 1.2519 |
| 0.4785 | 1056.0 | 2112 | 1.2516 |
| 0.4785 | 1057.0 | 2114 | 1.2512 |
| 0.4785 | 1058.0 | 2116 | 1.2502 |
| 0.4785 | 1059.0 | 2118 | 1.2485 |
| 0.4785 | 1060.0 | 2120 | 1.2457 |
| 0.4785 | 1061.0 | 2122 | 1.2373 |
| 0.4785 | 1062.0 | 2124 | 1.2280 |
| 0.4785 | 1063.0 | 2126 | 1.2303 |
| 0.4785 | 1064.0 | 2128 | 1.2325 |
| 0.4785 | 1065.0 | 2130 | 1.2314 |
| 0.4785 | 1066.0 | 2132 | 1.2312 |
| 0.4785 | 1067.0 | 2134 | 1.2283 |
| 0.4785 | 1068.0 | 2136 | 1.2274 |
| 0.4785 | 1069.0 | 2138 | 1.2313 |
| 0.4785 | 1070.0 | 2140 | 1.2404 |
| 0.4785 | 1071.0 | 2142 | 1.2493 |
| 0.4785 | 1072.0 | 2144 | 1.2654 |
| 0.4785 | 1073.0 | 2146 | 1.2767 |
| 0.4785 | 1074.0 | 2148 | 1.2787 |
| 0.4785 | 1075.0 | 2150 | 1.2802 |
| 0.4785 | 1076.0 | 2152 | 1.2777 |
| 0.4785 | 1077.0 | 2154 | 1.2750 |
| 0.4785 | 1078.0 | 2156 | 1.2743 |
| 0.4785 | 1079.0 | 2158 | 1.2729 |
| 0.4785 | 1080.0 | 2160 | 1.2711 |
| 0.4785 | 1081.0 | 2162 | 1.2691 |
| 0.4785 | 1082.0 | 2164 | 1.2672 |
| 0.4785 | 1083.0 | 2166 | 1.2659 |
| 0.4785 | 1084.0 | 2168 | 1.2641 |
| 0.4785 | 1085.0 | 2170 | 1.2618 |
| 0.4785 | 1086.0 | 2172 | 1.2580 |
| 0.4785 | 1087.0 | 2174 | 1.2571 |
| 0.4785 | 1088.0 | 2176 | 1.2552 |
| 0.4785 | 1089.0 | 2178 | 1.2529 |
| 0.4785 | 1090.0 | 2180 | 1.2513 |
| 0.4785 | 1091.0 | 2182 | 1.2493 |
| 0.4785 | 1092.0 | 2184 | 1.2449 |
| 0.4785 | 1093.0 | 2186 | 1.2375 |
| 0.4785 | 1094.0 | 2188 | 1.2298 |
| 0.4785 | 1095.0 | 2190 | 1.2240 |
| 0.4785 | 1096.0 | 2192 | 1.2166 |
| 0.4785 | 1097.0 | 2194 | 1.2073 |
| 0.4785 | 1098.0 | 2196 | 1.2012 |
| 0.4785 | 1099.0 | 2198 | 1.1973 |
| 0.4785 | 1100.0 | 2200 | 1.1885 |
| 0.4785 | 1101.0 | 2202 | 1.1834 |
| 0.4785 | 1102.0 | 2204 | 1.1892 |
| 0.4785 | 1103.0 | 2206 | 1.1970 |
| 0.4785 | 1104.0 | 2208 | 1.1964 |
| 0.4785 | 1105.0 | 2210 | 1.1834 |
| 0.4785 | 1106.0 | 2212 | 1.1803 |
| 0.4785 | 1107.0 | 2214 | 1.1775 |
| 0.4785 | 1108.0 | 2216 | 1.1648 |
| 0.4785 | 1109.0 | 2218 | 1.1581 |
| 0.4785 | 1110.0 | 2220 | 1.1575 |
| 0.4785 | 1111.0 | 2222 | 1.1600 |
| 0.4785 | 1112.0 | 2224 | 1.1493 |
| 0.4785 | 1113.0 | 2226 | 1.1433 |
| 0.4785 | 1114.0 | 2228 | 1.1488 |
| 0.4785 | 1115.0 | 2230 | 1.1570 |
| 0.4785 | 1116.0 | 2232 | 1.1730 |
| 0.4785 | 1117.0 | 2234 | 1.1817 |
| 0.4785 | 1118.0 | 2236 | 1.1934 |
| 0.4785 | 1119.0 | 2238 | 1.2062 |
| 0.4785 | 1120.0 | 2240 | 1.2129 |
| 0.4785 | 1121.0 | 2242 | 1.2208 |
| 0.4785 | 1122.0 | 2244 | 1.2289 |
| 0.4785 | 1123.0 | 2246 | 1.2361 |
| 0.4785 | 1124.0 | 2248 | 1.2382 |
| 0.4785 | 1125.0 | 2250 | 1.2393 |
| 0.4785 | 1126.0 | 2252 | 1.2392 |
| 0.4785 | 1127.0 | 2254 | 1.2390 |
| 0.4785 | 1128.0 | 2256 | 1.2392 |
| 0.4785 | 1129.0 | 2258 | 1.2394 |
| 0.4785 | 1130.0 | 2260 | 1.2401 |
| 0.4785 | 1131.0 | 2262 | 1.2423 |
| 0.4785 | 1132.0 | 2264 | 1.2444 |
| 0.4785 | 1133.0 | 2266 | 1.2469 |
| 0.4785 | 1134.0 | 2268 | 1.2499 |
| 0.4785 | 1135.0 | 2270 | 1.2499 |
| 0.4785 | 1136.0 | 2272 | 1.2474 |
| 0.4785 | 1137.0 | 2274 | 1.2358 |
| 0.4785 | 1138.0 | 2276 | 1.2051 |
| 0.4785 | 1139.0 | 2278 | 1.1686 |
| 0.4785 | 1140.0 | 2280 | 1.1572 |
| 0.4785 | 1141.0 | 2282 | 1.1571 |
| 0.4785 | 1142.0 | 2284 | 1.1563 |
| 0.4785 | 1143.0 | 2286 | 1.1557 |
| 0.4785 | 1144.0 | 2288 | 1.1525 |
| 0.4785 | 1145.0 | 2290 | 1.1454 |
| 0.4785 | 1146.0 | 2292 | 1.1454 |
| 0.4785 | 1147.0 | 2294 | 1.1520 |
| 0.4785 | 1148.0 | 2296 | 1.1847 |
| 0.4785 | 1149.0 | 2298 | 1.2197 |
| 0.4785 | 1150.0 | 2300 | 1.2432 |
| 0.4785 | 1151.0 | 2302 | 1.2558 |
| 0.4785 | 1152.0 | 2304 | 1.2646 |
| 0.4785 | 1153.0 | 2306 | 1.2735 |
| 0.4785 | 1154.0 | 2308 | 1.2799 |
| 0.4785 | 1155.0 | 2310 | 1.2850 |
| 0.4785 | 1156.0 | 2312 | 1.2861 |
| 0.4785 | 1157.0 | 2314 | 1.2867 |
| 0.4785 | 1158.0 | 2316 | 1.2868 |
| 0.4785 | 1159.0 | 2318 | 1.2854 |
| 0.4785 | 1160.0 | 2320 | 1.2828 |
| 0.4785 | 1161.0 | 2322 | 1.2797 |
| 0.4785 | 1162.0 | 2324 | 1.2766 |
| 0.4785 | 1163.0 | 2326 | 1.2729 |
| 0.4785 | 1164.0 | 2328 | 1.2721 |
| 0.4785 | 1165.0 | 2330 | 1.2740 |
| 0.4785 | 1166.0 | 2332 | 1.2761 |
| 0.4785 | 1167.0 | 2334 | 1.2774 |
| 0.4785 | 1168.0 | 2336 | 1.2775 |
| 0.4785 | 1169.0 | 2338 | 1.2774 |
| 0.4785 | 1170.0 | 2340 | 1.2765 |
| 0.4785 | 1171.0 | 2342 | 1.2750 |
| 0.4785 | 1172.0 | 2344 | 1.2736 |
| 0.4785 | 1173.0 | 2346 | 1.2713 |
| 0.4785 | 1174.0 | 2348 | 1.2706 |
| 0.4785 | 1175.0 | 2350 | 1.2726 |
| 0.4785 | 1176.0 | 2352 | 1.2741 |
| 0.4785 | 1177.0 | 2354 | 1.2749 |
| 0.4785 | 1178.0 | 2356 | 1.2766 |
| 0.4785 | 1179.0 | 2358 | 1.2762 |
| 0.4785 | 1180.0 | 2360 | 1.2757 |
| 0.4785 | 1181.0 | 2362 | 1.2758 |
| 0.4785 | 1182.0 | 2364 | 1.2767 |
| 0.4785 | 1183.0 | 2366 | 1.2794 |
| 0.4785 | 1184.0 | 2368 | 1.2813 |
| 0.4785 | 1185.0 | 2370 | 1.2814 |
| 0.4785 | 1186.0 | 2372 | 1.2815 |
| 0.4785 | 1187.0 | 2374 | 1.2820 |
| 0.4785 | 1188.0 | 2376 | 1.2825 |
| 0.4785 | 1189.0 | 2378 | 1.2819 |
| 0.4785 | 1190.0 | 2380 | 1.2810 |
| 0.4785 | 1191.0 | 2382 | 1.2795 |
| 0.4785 | 1192.0 | 2384 | 1.2786 |
| 0.4785 | 1193.0 | 2386 | 1.2764 |
| 0.4785 | 1194.0 | 2388 | 1.2746 |
| 0.4785 | 1195.0 | 2390 | 1.2724 |
| 0.4785 | 1196.0 | 2392 | 1.2698 |
| 0.4785 | 1197.0 | 2394 | 1.2675 |
| 0.4785 | 1198.0 | 2396 | 1.2663 |
| 0.4785 | 1199.0 | 2398 | 1.2650 |
| 0.4785 | 1200.0 | 2400 | 1.2633 |
| 0.4785 | 1201.0 | 2402 | 1.2609 |
| 0.4785 | 1202.0 | 2404 | 1.2592 |
| 0.4785 | 1203.0 | 2406 | 1.2554 |
| 0.4785 | 1204.0 | 2408 | 1.2543 |
| 0.4785 | 1205.0 | 2410 | 1.2521 |
| 0.4785 | 1206.0 | 2412 | 1.2567 |
| 0.4785 | 1207.0 | 2414 | 1.2675 |
| 0.4785 | 1208.0 | 2416 | 1.2765 |
| 0.4785 | 1209.0 | 2418 | 1.2839 |
| 0.4785 | 1210.0 | 2420 | 1.2909 |
| 0.4785 | 1211.0 | 2422 | 1.2958 |
| 0.4785 | 1212.0 | 2424 | 1.2982 |
| 0.4785 | 1213.0 | 2426 | 1.2990 |
| 0.4785 | 1214.0 | 2428 | 1.3002 |
| 0.4785 | 1215.0 | 2430 | 1.2989 |
| 0.4785 | 1216.0 | 2432 | 1.2956 |
| 0.4785 | 1217.0 | 2434 | 1.2921 |
| 0.4785 | 1218.0 | 2436 | 1.2890 |
| 0.4785 | 1219.0 | 2438 | 1.2800 |
| 0.4785 | 1220.0 | 2440 | 1.2706 |
| 0.4785 | 1221.0 | 2442 | 1.2624 |
| 0.4785 | 1222.0 | 2444 | 1.2545 |
| 0.4785 | 1223.0 | 2446 | 1.2452 |
| 0.4785 | 1224.0 | 2448 | 1.2374 |
| 0.4785 | 1225.0 | 2450 | 1.2324 |
| 0.4785 | 1226.0 | 2452 | 1.2325 |
| 0.4785 | 1227.0 | 2454 | 1.2294 |
| 0.4785 | 1228.0 | 2456 | 1.2235 |
| 0.4785 | 1229.0 | 2458 | 1.2164 |
| 0.4785 | 1230.0 | 2460 | 1.2084 |
| 0.4785 | 1231.0 | 2462 | 1.2084 |
| 0.4785 | 1232.0 | 2464 | 1.2079 |
| 0.4785 | 1233.0 | 2466 | 1.2103 |
| 0.4785 | 1234.0 | 2468 | 1.2140 |
| 0.4785 | 1235.0 | 2470 | 1.2188 |
| 0.4785 | 1236.0 | 2472 | 1.2234 |
| 0.4785 | 1237.0 | 2474 | 1.2299 |
| 0.4785 | 1238.0 | 2476 | 1.2364 |
| 0.4785 | 1239.0 | 2478 | 1.2413 |
| 0.4785 | 1240.0 | 2480 | 1.2446 |
| 0.4785 | 1241.0 | 2482 | 1.2477 |
| 0.4785 | 1242.0 | 2484 | 1.2517 |
| 0.4785 | 1243.0 | 2486 | 1.2548 |
| 0.4785 | 1244.0 | 2488 | 1.2565 |
| 0.4785 | 1245.0 | 2490 | 1.2581 |
| 0.4785 | 1246.0 | 2492 | 1.2598 |
| 0.4785 | 1247.0 | 2494 | 1.2605 |
| 0.4785 | 1248.0 | 2496 | 1.2630 |
| 0.4785 | 1249.0 | 2498 | 1.2512 |
| 0.4776 | 1250.0 | 2500 | 1.2253 |
| 0.4776 | 1251.0 | 2502 | 1.2045 |
| 0.4776 | 1252.0 | 2504 | 1.1972 |
| 0.4776 | 1253.0 | 2506 | 1.1981 |
| 0.4776 | 1254.0 | 2508 | 1.1989 |
| 0.4776 | 1255.0 | 2510 | 1.1981 |
| 0.4776 | 1256.0 | 2512 | 1.1980 |
| 0.4776 | 1257.0 | 2514 | 1.1981 |
| 0.4776 | 1258.0 | 2516 | 1.1932 |
| 0.4776 | 1259.0 | 2518 | 1.1888 |
| 0.4776 | 1260.0 | 2520 | 1.1837 |
| 0.4776 | 1261.0 | 2522 | 1.1776 |
| 0.4776 | 1262.0 | 2524 | 1.1761 |
| 0.4776 | 1263.0 | 2526 | 1.1791 |
| 0.4776 | 1264.0 | 2528 | 1.1889 |
| 0.4776 | 1265.0 | 2530 | 1.1988 |
| 0.4776 | 1266.0 | 2532 | 1.2035 |
| 0.4776 | 1267.0 | 2534 | 1.2069 |
| 0.4776 | 1268.0 | 2536 | 1.2046 |
| 0.4776 | 1269.0 | 2538 | 1.2051 |
| 0.4776 | 1270.0 | 2540 | 1.2029 |
| 0.4776 | 1271.0 | 2542 | 1.2000 |
| 0.4776 | 1272.0 | 2544 | 1.1959 |
| 0.4776 | 1273.0 | 2546 | 1.1967 |
| 0.4776 | 1274.0 | 2548 | 1.1910 |
| 0.4776 | 1275.0 | 2550 | 1.1881 |
| 0.4776 | 1276.0 | 2552 | 1.1774 |
| 0.4776 | 1277.0 | 2554 | 1.1647 |
| 0.4776 | 1278.0 | 2556 | 1.1587 |
| 0.4776 | 1279.0 | 2558 | 1.1595 |
| 0.4776 | 1280.0 | 2560 | 1.1641 |
| 0.4776 | 1281.0 | 2562 | 1.1694 |
| 0.4776 | 1282.0 | 2564 | 1.1757 |
| 0.4776 | 1283.0 | 2566 | 1.1906 |
| 0.4776 | 1284.0 | 2568 | 1.2120 |
| 0.4776 | 1285.0 | 2570 | 1.2318 |
| 0.4776 | 1286.0 | 2572 | 1.2443 |
| 0.4776 | 1287.0 | 2574 | 1.2540 |
| 0.4776 | 1288.0 | 2576 | 1.2575 |
| 0.4776 | 1289.0 | 2578 | 1.2605 |
| 0.4776 | 1290.0 | 2580 | 1.2632 |
| 0.4776 | 1291.0 | 2582 | 1.2656 |
| 0.4776 | 1292.0 | 2584 | 1.2646 |
| 0.4776 | 1293.0 | 2586 | 1.2643 |
| 0.4776 | 1294.0 | 2588 | 1.2640 |
| 0.4776 | 1295.0 | 2590 | 1.2644 |
| 0.4776 | 1296.0 | 2592 | 1.2658 |
| 0.4776 | 1297.0 | 2594 | 1.2665 |
| 0.4776 | 1298.0 | 2596 | 1.2665 |
| 0.4776 | 1299.0 | 2598 | 1.2661 |
| 0.4776 | 1300.0 | 2600 | 1.2653 |
| 0.4776 | 1301.0 | 2602 | 1.2651 |
| 0.4776 | 1302.0 | 2604 | 1.2652 |
| 0.4776 | 1303.0 | 2606 | 1.2654 |
| 0.4776 | 1304.0 | 2608 | 1.2643 |
| 0.4776 | 1305.0 | 2610 | 1.2633 |
| 0.4776 | 1306.0 | 2612 | 1.2608 |
| 0.4776 | 1307.0 | 2614 | 1.2589 |
| 0.4776 | 1308.0 | 2616 | 1.2581 |
| 0.4776 | 1309.0 | 2618 | 1.2578 |
| 0.4776 | 1310.0 | 2620 | 1.2574 |
| 0.4776 | 1311.0 | 2622 | 1.2556 |
| 0.4776 | 1312.0 | 2624 | 1.2535 |
| 0.4776 | 1313.0 | 2626 | 1.2511 |
| 0.4776 | 1314.0 | 2628 | 1.2496 |
| 0.4776 | 1315.0 | 2630 | 1.2490 |
| 0.4776 | 1316.0 | 2632 | 1.2498 |
| 0.4776 | 1317.0 | 2634 | 1.2512 |
| 0.4776 | 1318.0 | 2636 | 1.2514 |
| 0.4776 | 1319.0 | 2638 | 1.2508 |
| 0.4776 | 1320.0 | 2640 | 1.2501 |
| 0.4776 | 1321.0 | 2642 | 1.2479 |
| 0.4776 | 1322.0 | 2644 | 1.2458 |
| 0.4776 | 1323.0 | 2646 | 1.2436 |
| 0.4776 | 1324.0 | 2648 | 1.2426 |
| 0.4776 | 1325.0 | 2650 | 1.2445 |
| 0.4776 | 1326.0 | 2652 | 1.2458 |
| 0.4776 | 1327.0 | 2654 | 1.2430 |
| 0.4776 | 1328.0 | 2656 | 1.2369 |
| 0.4776 | 1329.0 | 2658 | 1.2298 |
| 0.4776 | 1330.0 | 2660 | 1.2232 |
| 0.4776 | 1331.0 | 2662 | 1.2157 |
| 0.4776 | 1332.0 | 2664 | 1.2062 |
| 0.4776 | 1333.0 | 2666 | 1.1999 |
| 0.4776 | 1334.0 | 2668 | 1.1974 |
| 0.4776 | 1335.0 | 2670 | 1.1988 |
| 0.4776 | 1336.0 | 2672 | 1.2024 |
| 0.4776 | 1337.0 | 2674 | 1.2041 |
| 0.4776 | 1338.0 | 2676 | 1.2073 |
| 0.4776 | 1339.0 | 2678 | 1.2076 |
| 0.4776 | 1340.0 | 2680 | 1.2092 |
| 0.4776 | 1341.0 | 2682 | 1.2159 |
| 0.4776 | 1342.0 | 2684 | 1.2192 |
| 0.4776 | 1343.0 | 2686 | 1.2202 |
| 0.4776 | 1344.0 | 2688 | 1.2206 |
| 0.4776 | 1345.0 | 2690 | 1.2229 |
| 0.4776 | 1346.0 | 2692 | 1.2252 |
| 0.4776 | 1347.0 | 2694 | 1.2247 |
| 0.4776 | 1348.0 | 2696 | 1.2225 |
| 0.4776 | 1349.0 | 2698 | 1.2210 |
| 0.4776 | 1350.0 | 2700 | 1.2181 |
| 0.4776 | 1351.0 | 2702 | 1.2134 |
| 0.4776 | 1352.0 | 2704 | 1.2085 |
| 0.4776 | 1353.0 | 2706 | 1.2028 |
| 0.4776 | 1354.0 | 2708 | 1.2030 |
| 0.4776 | 1355.0 | 2710 | 1.2042 |
| 0.4776 | 1356.0 | 2712 | 1.2050 |
| 0.4776 | 1357.0 | 2714 | 1.2049 |
| 0.4776 | 1358.0 | 2716 | 1.2060 |
| 0.4776 | 1359.0 | 2718 | 1.2039 |
| 0.4776 | 1360.0 | 2720 | 1.2047 |
| 0.4776 | 1361.0 | 2722 | 1.2044 |
| 0.4776 | 1362.0 | 2724 | 1.2072 |
| 0.4776 | 1363.0 | 2726 | 1.2099 |
| 0.4776 | 1364.0 | 2728 | 1.2099 |
| 0.4776 | 1365.0 | 2730 | 1.2082 |
| 0.4776 | 1366.0 | 2732 | 1.2083 |
| 0.4776 | 1367.0 | 2734 | 1.2115 |
| 0.4776 | 1368.0 | 2736 | 1.2154 |
| 0.4776 | 1369.0 | 2738 | 1.2166 |
| 0.4776 | 1370.0 | 2740 | 1.2202 |
| 0.4776 | 1371.0 | 2742 | 1.2248 |
| 0.4776 | 1372.0 | 2744 | 1.2285 |
| 0.4776 | 1373.0 | 2746 | 1.2331 |
| 0.4776 | 1374.0 | 2748 | 1.2364 |
| 0.4776 | 1375.0 | 2750 | 1.2392 |
| 0.4776 | 1376.0 | 2752 | 1.2434 |
| 0.4776 | 1377.0 | 2754 | 1.2468 |
| 0.4776 | 1378.0 | 2756 | 1.2489 |
| 0.4776 | 1379.0 | 2758 | 1.2504 |
| 0.4776 | 1380.0 | 2760 | 1.2527 |
| 0.4776 | 1381.0 | 2762 | 1.2539 |
| 0.4776 | 1382.0 | 2764 | 1.2628 |
| 0.4776 | 1383.0 | 2766 | 1.2715 |
| 0.4776 | 1384.0 | 2768 | 1.2808 |
| 0.4776 | 1385.0 | 2770 | 1.2908 |
| 0.4776 | 1386.0 | 2772 | 1.2992 |
| 0.4776 | 1387.0 | 2774 | 1.3029 |
| 0.4776 | 1388.0 | 2776 | 1.3042 |
| 0.4776 | 1389.0 | 2778 | 1.3057 |
| 0.4776 | 1390.0 | 2780 | 1.3049 |
| 0.4776 | 1391.0 | 2782 | 1.2997 |
| 0.4776 | 1392.0 | 2784 | 1.2924 |
| 0.4776 | 1393.0 | 2786 | 1.2861 |
| 0.4776 | 1394.0 | 2788 | 1.2802 |
| 0.4776 | 1395.0 | 2790 | 1.2728 |
| 0.4776 | 1396.0 | 2792 | 1.2655 |
| 0.4776 | 1397.0 | 2794 | 1.2595 |
| 0.4776 | 1398.0 | 2796 | 1.2542 |
| 0.4776 | 1399.0 | 2798 | 1.2492 |
| 0.4776 | 1400.0 | 2800 | 1.2457 |
| 0.4776 | 1401.0 | 2802 | 1.2397 |
| 0.4776 | 1402.0 | 2804 | 1.2292 |
| 0.4776 | 1403.0 | 2806 | 1.2216 |
| 0.4776 | 1404.0 | 2808 | 1.2159 |
| 0.4776 | 1405.0 | 2810 | 1.2126 |
| 0.4776 | 1406.0 | 2812 | 1.2123 |
| 0.4776 | 1407.0 | 2814 | 1.2132 |
| 0.4776 | 1408.0 | 2816 | 1.2163 |
| 0.4776 | 1409.0 | 2818 | 1.2231 |
| 0.4776 | 1410.0 | 2820 | 1.2286 |
| 0.4776 | 1411.0 | 2822 | 1.2326 |
| 0.4776 | 1412.0 | 2824 | 1.2418 |
| 0.4776 | 1413.0 | 2826 | 1.2497 |
| 0.4776 | 1414.0 | 2828 | 1.2551 |
| 0.4776 | 1415.0 | 2830 | 1.2587 |
| 0.4776 | 1416.0 | 2832 | 1.2609 |
| 0.4776 | 1417.0 | 2834 | 1.2656 |
| 0.4776 | 1418.0 | 2836 | 1.2764 |
| 0.4776 | 1419.0 | 2838 | 1.2883 |
| 0.4776 | 1420.0 | 2840 | 1.2941 |
| 0.4776 | 1421.0 | 2842 | 1.2972 |
| 0.4776 | 1422.0 | 2844 | 1.3007 |
| 0.4776 | 1423.0 | 2846 | 1.3036 |
| 0.4776 | 1424.0 | 2848 | 1.3040 |
| 0.4776 | 1425.0 | 2850 | 1.3047 |
| 0.4776 | 1426.0 | 2852 | 1.3029 |
| 0.4776 | 1427.0 | 2854 | 1.2976 |
| 0.4776 | 1428.0 | 2856 | 1.2914 |
| 0.4776 | 1429.0 | 2858 | 1.2850 |
| 0.4776 | 1430.0 | 2860 | 1.2778 |
| 0.4776 | 1431.0 | 2862 | 1.2711 |
| 0.4776 | 1432.0 | 2864 | 1.2642 |
| 0.4776 | 1433.0 | 2866 | 1.2580 |
| 0.4776 | 1434.0 | 2868 | 1.2524 |
| 0.4776 | 1435.0 | 2870 | 1.2447 |
| 0.4776 | 1436.0 | 2872 | 1.2385 |
| 0.4776 | 1437.0 | 2874 | 1.2336 |
| 0.4776 | 1438.0 | 2876 | 1.2328 |
| 0.4776 | 1439.0 | 2878 | 1.2337 |
| 0.4776 | 1440.0 | 2880 | 1.2323 |
| 0.4776 | 1441.0 | 2882 | 1.2337 |
| 0.4776 | 1442.0 | 2884 | 1.2350 |
| 0.4776 | 1443.0 | 2886 | 1.2350 |
| 0.4776 | 1444.0 | 2888 | 1.2351 |
| 0.4776 | 1445.0 | 2890 | 1.2357 |
| 0.4776 | 1446.0 | 2892 | 1.2363 |
| 0.4776 | 1447.0 | 2894 | 1.2367 |
| 0.4776 | 1448.0 | 2896 | 1.2366 |
| 0.4776 | 1449.0 | 2898 | 1.2385 |
| 0.4776 | 1450.0 | 2900 | 1.2385 |
| 0.4776 | 1451.0 | 2902 | 1.2400 |
| 0.4776 | 1452.0 | 2904 | 1.2404 |
| 0.4776 | 1453.0 | 2906 | 1.2419 |
| 0.4776 | 1454.0 | 2908 | 1.2431 |
| 0.4776 | 1455.0 | 2910 | 1.2477 |
| 0.4776 | 1456.0 | 2912 | 1.2485 |
| 0.4776 | 1457.0 | 2914 | 1.2568 |
| 0.4776 | 1458.0 | 2916 | 1.2655 |
| 0.4776 | 1459.0 | 2918 | 1.2744 |
| 0.4776 | 1460.0 | 2920 | 1.2770 |
| 0.4776 | 1461.0 | 2922 | 1.2726 |
| 0.4776 | 1462.0 | 2924 | 1.2615 |
| 0.4776 | 1463.0 | 2926 | 1.2530 |
| 0.4776 | 1464.0 | 2928 | 1.2452 |
| 0.4776 | 1465.0 | 2930 | 1.2369 |
| 0.4776 | 1466.0 | 2932 | 1.2314 |
| 0.4776 | 1467.0 | 2934 | 1.2226 |
| 0.4776 | 1468.0 | 2936 | 1.2295 |
| 0.4776 | 1469.0 | 2938 | 1.2430 |
| 0.4776 | 1470.0 | 2940 | 1.2523 |
| 0.4776 | 1471.0 | 2942 | 1.2602 |
| 0.4776 | 1472.0 | 2944 | 1.2596 |
| 0.4776 | 1473.0 | 2946 | 1.2754 |
| 0.4776 | 1474.0 | 2948 | 1.3028 |
| 0.4776 | 1475.0 | 2950 | 1.1662 |
| 0.4776 | 1476.0 | 2952 | 1.1023 |
| 0.4776 | 1477.0 | 2954 | 1.1358 |
| 0.4776 | 1478.0 | 2956 | 1.2325 |
| 0.4776 | 1479.0 | 2958 | 1.3017 |
| 0.4776 | 1480.0 | 2960 | 1.3281 |
| 0.4776 | 1481.0 | 2962 | 1.3232 |
| 0.4776 | 1482.0 | 2964 | 1.3235 |
| 0.4776 | 1483.0 | 2966 | 1.3584 |
| 0.4776 | 1484.0 | 2968 | 1.3943 |
| 0.4776 | 1485.0 | 2970 | 1.4020 |
| 0.4776 | 1486.0 | 2972 | 1.3987 |
| 0.4776 | 1487.0 | 2974 | 1.3949 |
| 0.4776 | 1488.0 | 2976 | 1.3819 |
| 0.4776 | 1489.0 | 2978 | 1.3643 |
| 0.4776 | 1490.0 | 2980 | 1.3430 |
| 0.4776 | 1491.0 | 2982 | 1.3178 |
| 0.4776 | 1492.0 | 2984 | 1.2924 |
| 0.4776 | 1493.0 | 2986 | 1.2658 |
| 0.4776 | 1494.0 | 2988 | 1.2485 |
| 0.4776 | 1495.0 | 2990 | 1.2315 |
| 0.4776 | 1496.0 | 2992 | 1.2149 |
| 0.4776 | 1497.0 | 2994 | 1.1984 |
| 0.4776 | 1498.0 | 2996 | 1.1837 |
| 0.4776 | 1499.0 | 2998 | 1.1755 |
| 0.4293 | 1500.0 | 3000 | 1.1671 |
| 0.4293 | 1501.0 | 3002 | 1.1605 |
| 0.4293 | 1502.0 | 3004 | 1.1546 |
| 0.4293 | 1503.0 | 3006 | 1.1614 |
| 0.4293 | 1504.0 | 3008 | 1.1678 |
| 0.4293 | 1505.0 | 3010 | 1.1733 |
| 0.4293 | 1506.0 | 3012 | 1.1768 |
| 0.4293 | 1507.0 | 3014 | 1.1782 |
| 0.4293 | 1508.0 | 3016 | 1.1803 |
| 0.4293 | 1509.0 | 3018 | 1.1814 |
| 0.4293 | 1510.0 | 3020 | 1.1868 |
| 0.4293 | 1511.0 | 3022 | 1.1985 |
| 0.4293 | 1512.0 | 3024 | 1.2086 |
| 0.4293 | 1513.0 | 3026 | 1.2192 |
| 0.4293 | 1514.0 | 3028 | 1.2179 |
| 0.4293 | 1515.0 | 3030 | 1.2257 |
| 0.4293 | 1516.0 | 3032 | 1.2354 |
| 0.4293 | 1517.0 | 3034 | 1.2448 |
| 0.4293 | 1518.0 | 3036 | 1.2579 |
| 0.4293 | 1519.0 | 3038 | 1.2649 |
| 0.4293 | 1520.0 | 3040 | 1.2681 |
| 0.4293 | 1521.0 | 3042 | 1.2677 |
| 0.4293 | 1522.0 | 3044 | 1.2616 |
| 0.4293 | 1523.0 | 3046 | 1.2519 |
| 0.4293 | 1524.0 | 3048 | 1.2442 |
| 0.4293 | 1525.0 | 3050 | 1.2377 |
| 0.4293 | 1526.0 | 3052 | 1.2318 |
| 0.4293 | 1527.0 | 3054 | 1.2254 |
| 0.4293 | 1528.0 | 3056 | 1.2198 |
| 0.4293 | 1529.0 | 3058 | 1.2149 |
| 0.4293 | 1530.0 | 3060 | 1.2111 |
| 0.4293 | 1531.0 | 3062 | 1.2077 |
| 0.4293 | 1532.0 | 3064 | 1.2047 |
| 0.4293 | 1533.0 | 3066 | 1.2044 |
| 0.4293 | 1534.0 | 3068 | 1.2046 |
| 0.4293 | 1535.0 | 3070 | 1.2043 |
| 0.4293 | 1536.0 | 3072 | 1.2045 |
| 0.4293 | 1537.0 | 3074 | 1.2060 |
| 0.4293 | 1538.0 | 3076 | 1.2080 |
| 0.4293 | 1539.0 | 3078 | 1.2094 |
| 0.4293 | 1540.0 | 3080 | 1.2106 |
| 0.4293 | 1541.0 | 3082 | 1.2118 |
| 0.4293 | 1542.0 | 3084 | 1.2129 |
| 0.4293 | 1543.0 | 3086 | 1.2140 |
| 0.4293 | 1544.0 | 3088 | 1.2148 |
| 0.4293 | 1545.0 | 3090 | 1.2151 |
| 0.4293 | 1546.0 | 3092 | 1.2161 |
| 0.4293 | 1547.0 | 3094 | 1.2172 |
| 0.4293 | 1548.0 | 3096 | 1.2184 |
| 0.4293 | 1549.0 | 3098 | 1.2195 |
| 0.4293 | 1550.0 | 3100 | 1.2199 |
| 0.4293 | 1551.0 | 3102 | 1.2188 |
| 0.4293 | 1552.0 | 3104 | 1.2166 |
| 0.4293 | 1553.0 | 3106 | 1.2167 |
| 0.4293 | 1554.0 | 3108 | 1.2170 |
| 0.4293 | 1555.0 | 3110 | 1.2161 |
| 0.4293 | 1556.0 | 3112 | 1.2156 |
| 0.4293 | 1557.0 | 3114 | 1.2171 |
| 0.4293 | 1558.0 | 3116 | 1.2183 |
| 0.4293 | 1559.0 | 3118 | 1.2175 |
| 0.4293 | 1560.0 | 3120 | 1.2176 |
| 0.4293 | 1561.0 | 3122 | 1.2195 |
| 0.4293 | 1562.0 | 3124 | 1.2167 |
| 0.4293 | 1563.0 | 3126 | 1.2125 |
| 0.4293 | 1564.0 | 3128 | 1.2086 |
| 0.4293 | 1565.0 | 3130 | 1.2105 |
| 0.4293 | 1566.0 | 3132 | 1.2115 |
| 0.4293 | 1567.0 | 3134 | 1.2120 |
| 0.4293 | 1568.0 | 3136 | 1.2121 |
| 0.4293 | 1569.0 | 3138 | 1.2127 |
| 0.4293 | 1570.0 | 3140 | 1.2131 |
| 0.4293 | 1571.0 | 3142 | 1.2128 |
| 0.4293 | 1572.0 | 3144 | 1.2167 |
| 0.4293 | 1573.0 | 3146 | 1.2191 |
| 0.4293 | 1574.0 | 3148 | 1.2207 |
| 0.4293 | 1575.0 | 3150 | 1.2220 |
| 0.4293 | 1576.0 | 3152 | 1.2224 |
| 0.4293 | 1577.0 | 3154 | 1.2225 |
| 0.4293 | 1578.0 | 3156 | 1.2224 |
| 0.4293 | 1579.0 | 3158 | 1.2220 |
| 0.4293 | 1580.0 | 3160 | 1.2215 |
| 0.4293 | 1581.0 | 3162 | 1.2209 |
| 0.4293 | 1582.0 | 3164 | 1.2203 |
| 0.4293 | 1583.0 | 3166 | 1.2198 |
| 0.4293 | 1584.0 | 3168 | 1.2195 |
| 0.4293 | 1585.0 | 3170 | 1.2188 |
| 0.4293 | 1586.0 | 3172 | 1.2180 |
| 0.4293 | 1587.0 | 3174 | 1.2173 |
| 0.4293 | 1588.0 | 3176 | 1.2168 |
| 0.4293 | 1589.0 | 3178 | 1.2166 |
| 0.4293 | 1590.0 | 3180 | 1.2159 |
| 0.4293 | 1591.0 | 3182 | 1.2142 |
| 0.4293 | 1592.0 | 3184 | 1.2126 |
| 0.4293 | 1593.0 | 3186 | 1.2106 |
| 0.4293 | 1594.0 | 3188 | 1.2086 |
| 0.4293 | 1595.0 | 3190 | 1.2070 |
| 0.4293 | 1596.0 | 3192 | 1.2055 |
| 0.4293 | 1597.0 | 3194 | 1.2044 |
| 0.4293 | 1598.0 | 3196 | 1.2033 |
| 0.4293 | 1599.0 | 3198 | 1.2022 |
| 0.4293 | 1600.0 | 3200 | 1.2012 |
| 0.4293 | 1601.0 | 3202 | 1.2005 |
| 0.4293 | 1602.0 | 3204 | 1.2000 |
| 0.4293 | 1603.0 | 3206 | 1.1996 |
| 0.4293 | 1604.0 | 3208 | 1.1983 |
| 0.4293 | 1605.0 | 3210 | 1.1973 |
| 0.4293 | 1606.0 | 3212 | 1.1966 |
| 0.4293 | 1607.0 | 3214 | 1.1965 |
| 0.4293 | 1608.0 | 3216 | 1.1967 |
| 0.4293 | 1609.0 | 3218 | 1.1971 |
| 0.4293 | 1610.0 | 3220 | 1.1975 |
| 0.4293 | 1611.0 | 3222 | 1.1976 |
| 0.4293 | 1612.0 | 3224 | 1.1980 |
| 0.4293 | 1613.0 | 3226 | 1.1980 |
| 0.4293 | 1614.0 | 3228 | 1.1978 |
| 0.4293 | 1615.0 | 3230 | 1.1974 |
| 0.4293 | 1616.0 | 3232 | 1.1969 |
| 0.4293 | 1617.0 | 3234 | 1.1967 |
| 0.4293 | 1618.0 | 3236 | 1.1964 |
| 0.4293 | 1619.0 | 3238 | 1.1960 |
| 0.4293 | 1620.0 | 3240 | 1.1953 |
| 0.4293 | 1621.0 | 3242 | 1.1943 |
| 0.4293 | 1622.0 | 3244 | 1.1931 |
| 0.4293 | 1623.0 | 3246 | 1.1918 |
| 0.4293 | 1624.0 | 3248 | 1.1912 |
| 0.4293 | 1625.0 | 3250 | 1.1906 |
| 0.4293 | 1626.0 | 3252 | 1.1900 |
| 0.4293 | 1627.0 | 3254 | 1.1899 |
| 0.4293 | 1628.0 | 3256 | 1.1907 |
| 0.4293 | 1629.0 | 3258 | 1.1921 |
| 0.4293 | 1630.0 | 3260 | 1.1932 |
| 0.4293 | 1631.0 | 3262 | 1.1944 |
| 0.4293 | 1632.0 | 3264 | 1.1958 |
| 0.4293 | 1633.0 | 3266 | 1.1968 |
| 0.4293 | 1634.0 | 3268 | 1.1975 |
| 0.4293 | 1635.0 | 3270 | 1.1978 |
| 0.4293 | 1636.0 | 3272 | 1.1981 |
| 0.4293 | 1637.0 | 3274 | 1.1985 |
| 0.4293 | 1638.0 | 3276 | 1.1990 |
| 0.4293 | 1639.0 | 3278 | 1.1992 |
| 0.4293 | 1640.0 | 3280 | 1.1969 |
| 0.4293 | 1641.0 | 3282 | 1.1957 |
| 0.4293 | 1642.0 | 3284 | 1.1950 |
| 0.4293 | 1643.0 | 3286 | 1.1943 |
| 0.4293 | 1644.0 | 3288 | 1.1940 |
| 0.4293 | 1645.0 | 3290 | 1.1939 |
| 0.4293 | 1646.0 | 3292 | 1.1939 |
| 0.4293 | 1647.0 | 3294 | 1.1952 |
| 0.4293 | 1648.0 | 3296 | 1.1972 |
| 0.4293 | 1649.0 | 3298 | 1.1984 |
| 0.4293 | 1650.0 | 3300 | 1.1988 |
| 0.4293 | 1651.0 | 3302 | 1.1985 |
| 0.4293 | 1652.0 | 3304 | 1.1983 |
| 0.4293 | 1653.0 | 3306 | 1.1980 |
| 0.4293 | 1654.0 | 3308 | 1.1977 |
| 0.4293 | 1655.0 | 3310 | 1.1971 |
| 0.4293 | 1656.0 | 3312 | 1.2015 |
| 0.4293 | 1657.0 | 3314 | 1.2049 |
| 0.4293 | 1658.0 | 3316 | 1.2046 |
| 0.4293 | 1659.0 | 3318 | 1.2064 |
| 0.4293 | 1660.0 | 3320 | 1.2121 |
| 0.4293 | 1661.0 | 3322 | 1.2175 |
| 0.4293 | 1662.0 | 3324 | 1.2186 |
| 0.4293 | 1663.0 | 3326 | 1.2164 |
| 0.4293 | 1664.0 | 3328 | 1.2130 |
| 0.4293 | 1665.0 | 3330 | 1.2085 |
| 0.4293 | 1666.0 | 3332 | 1.2030 |
| 0.4293 | 1667.0 | 3334 | 1.1986 |
| 0.4293 | 1668.0 | 3336 | 1.1955 |
| 0.4293 | 1669.0 | 3338 | 1.1921 |
| 0.4293 | 1670.0 | 3340 | 1.1900 |
| 0.4293 | 1671.0 | 3342 | 1.1891 |
| 0.4293 | 1672.0 | 3344 | 1.1886 |
| 0.4293 | 1673.0 | 3346 | 1.1893 |
| 0.4293 | 1674.0 | 3348 | 1.1898 |
| 0.4293 | 1675.0 | 3350 | 1.1900 |
| 0.4293 | 1676.0 | 3352 | 1.1900 |
| 0.4293 | 1677.0 | 3354 | 1.1894 |
| 0.4293 | 1678.0 | 3356 | 1.1889 |
| 0.4293 | 1679.0 | 3358 | 1.1890 |
| 0.4293 | 1680.0 | 3360 | 1.1902 |
| 0.4293 | 1681.0 | 3362 | 1.1911 |
| 0.4293 | 1682.0 | 3364 | 1.1915 |
| 0.4293 | 1683.0 | 3366 | 1.1917 |
| 0.4293 | 1684.0 | 3368 | 1.1916 |
| 0.4293 | 1685.0 | 3370 | 1.1916 |
| 0.4293 | 1686.0 | 3372 | 1.1914 |
| 0.4293 | 1687.0 | 3374 | 1.1914 |
| 0.4293 | 1688.0 | 3376 | 1.1909 |
| 0.4293 | 1689.0 | 3378 | 1.1903 |
| 0.4293 | 1690.0 | 3380 | 1.1892 |
| 0.4293 | 1691.0 | 3382 | 1.1884 |
| 0.4293 | 1692.0 | 3384 | 1.1876 |
| 0.4293 | 1693.0 | 3386 | 1.1868 |
| 0.4293 | 1694.0 | 3388 | 1.1868 |
| 0.4293 | 1695.0 | 3390 | 1.1882 |
| 0.4293 | 1696.0 | 3392 | 1.1900 |
| 0.4293 | 1697.0 | 3394 | 1.1918 |
| 0.4293 | 1698.0 | 3396 | 1.1932 |
| 0.4293 | 1699.0 | 3398 | 1.1940 |
| 0.4293 | 1700.0 | 3400 | 1.1941 |
| 0.4293 | 1701.0 | 3402 | 1.1980 |
| 0.4293 | 1702.0 | 3404 | 1.2025 |
| 0.4293 | 1703.0 | 3406 | 1.2061 |
| 0.4293 | 1704.0 | 3408 | 1.2090 |
| 0.4293 | 1705.0 | 3410 | 1.2112 |
| 0.4293 | 1706.0 | 3412 | 1.2133 |
| 0.4293 | 1707.0 | 3414 | 1.2151 |
| 0.4293 | 1708.0 | 3416 | 1.2166 |
| 0.4293 | 1709.0 | 3418 | 1.2183 |
| 0.4293 | 1710.0 | 3420 | 1.2194 |
| 0.4293 | 1711.0 | 3422 | 1.2200 |
| 0.4293 | 1712.0 | 3424 | 1.2204 |
| 0.4293 | 1713.0 | 3426 | 1.2203 |
| 0.4293 | 1714.0 | 3428 | 1.2203 |
| 0.4293 | 1715.0 | 3430 | 1.2197 |
| 0.4293 | 1716.0 | 3432 | 1.2188 |
| 0.4293 | 1717.0 | 3434 | 1.2181 |
| 0.4293 | 1718.0 | 3436 | 1.2164 |
| 0.4293 | 1719.0 | 3438 | 1.2131 |
| 0.4293 | 1720.0 | 3440 | 1.2107 |
| 0.4293 | 1721.0 | 3442 | 1.2105 |
| 0.4293 | 1722.0 | 3444 | 1.2101 |
| 0.4293 | 1723.0 | 3446 | 1.2099 |
| 0.4293 | 1724.0 | 3448 | 1.2102 |
| 0.4293 | 1725.0 | 3450 | 1.2102 |
| 0.4293 | 1726.0 | 3452 | 1.2106 |
| 0.4293 | 1727.0 | 3454 | 1.2114 |
| 0.4293 | 1728.0 | 3456 | 1.2122 |
| 0.4293 | 1729.0 | 3458 | 1.2127 |
| 0.4293 | 1730.0 | 3460 | 1.2131 |
| 0.4293 | 1731.0 | 3462 | 1.2133 |
| 0.4293 | 1732.0 | 3464 | 1.2135 |
| 0.4293 | 1733.0 | 3466 | 1.2135 |
| 0.4293 | 1734.0 | 3468 | 1.2139 |
| 0.4293 | 1735.0 | 3470 | 1.2146 |
| 0.4293 | 1736.0 | 3472 | 1.2153 |
| 0.4293 | 1737.0 | 3474 | 1.2157 |
| 0.4293 | 1738.0 | 3476 | 1.2159 |
| 0.4293 | 1739.0 | 3478 | 1.2164 |
| 0.4293 | 1740.0 | 3480 | 1.2169 |
| 0.4293 | 1741.0 | 3482 | 1.2173 |
| 0.4293 | 1742.0 | 3484 | 1.2177 |
| 0.4293 | 1743.0 | 3486 | 1.2179 |
| 0.4293 | 1744.0 | 3488 | 1.2181 |
| 0.4293 | 1745.0 | 3490 | 1.2180 |
| 0.4293 | 1746.0 | 3492 | 1.2180 |
| 0.4293 | 1747.0 | 3494 | 1.2178 |
| 0.4293 | 1748.0 | 3496 | 1.2175 |
| 0.4293 | 1749.0 | 3498 | 1.2170 |
| 0.0013 | 1750.0 | 3500 | 1.2162 |
| 0.0013 | 1751.0 | 3502 | 1.2154 |
| 0.0013 | 1752.0 | 3504 | 1.2148 |
| 0.0013 | 1753.0 | 3506 | 1.2141 |
| 0.0013 | 1754.0 | 3508 | 1.2137 |
| 0.0013 | 1755.0 | 3510 | 1.2132 |
| 0.0013 | 1756.0 | 3512 | 1.2128 |
| 0.0013 | 1757.0 | 3514 | 1.2122 |
| 0.0013 | 1758.0 | 3516 | 1.2108 |
| 0.0013 | 1759.0 | 3518 | 1.2082 |
| 0.0013 | 1760.0 | 3520 | 1.2055 |
| 0.0013 | 1761.0 | 3522 | 1.2032 |
| 0.0013 | 1762.0 | 3524 | 1.2012 |
| 0.0013 | 1763.0 | 3526 | 1.1998 |
| 0.0013 | 1764.0 | 3528 | 1.1991 |
| 0.0013 | 1765.0 | 3530 | 1.1983 |
| 0.0013 | 1766.0 | 3532 | 1.1978 |
| 0.0013 | 1767.0 | 3534 | 1.1978 |
| 0.0013 | 1768.0 | 3536 | 1.1983 |
| 0.0013 | 1769.0 | 3538 | 1.1987 |
| 0.0013 | 1770.0 | 3540 | 1.1989 |
| 0.0013 | 1771.0 | 3542 | 1.1992 |
| 0.0013 | 1772.0 | 3544 | 1.1994 |
| 0.0013 | 1773.0 | 3546 | 1.1993 |
| 0.0013 | 1774.0 | 3548 | 1.1994 |
| 0.0013 | 1775.0 | 3550 | 1.1994 |
| 0.0013 | 1776.0 | 3552 | 1.1995 |
| 0.0013 | 1777.0 | 3554 | 1.1995 |
| 0.0013 | 1778.0 | 3556 | 1.1986 |
| 0.0013 | 1779.0 | 3558 | 1.1976 |
| 0.0013 | 1780.0 | 3560 | 1.1962 |
| 0.0013 | 1781.0 | 3562 | 1.1948 |
| 0.0013 | 1782.0 | 3564 | 1.1934 |
| 0.0013 | 1783.0 | 3566 | 1.1922 |
| 0.0013 | 1784.0 | 3568 | 1.1912 |
| 0.0013 | 1785.0 | 3570 | 1.1903 |
| 0.0013 | 1786.0 | 3572 | 1.1894 |
| 0.0013 | 1787.0 | 3574 | 1.1885 |
| 0.0013 | 1788.0 | 3576 | 1.1886 |
| 0.0013 | 1789.0 | 3578 | 1.1889 |
| 0.0013 | 1790.0 | 3580 | 1.1889 |
| 0.0013 | 1791.0 | 3582 | 1.1889 |
| 0.0013 | 1792.0 | 3584 | 1.1887 |
| 0.0013 | 1793.0 | 3586 | 1.1889 |
| 0.0013 | 1794.0 | 3588 | 1.1890 |
| 0.0013 | 1795.0 | 3590 | 1.1890 |
| 0.0013 | 1796.0 | 3592 | 1.1890 |
| 0.0013 | 1797.0 | 3594 | 1.1892 |
| 0.0013 | 1798.0 | 3596 | 1.1892 |
| 0.0013 | 1799.0 | 3598 | 1.1890 |
| 0.0013 | 1800.0 | 3600 | 1.1888 |
| 0.0013 | 1801.0 | 3602 | 1.1882 |
| 0.0013 | 1802.0 | 3604 | 1.1876 |
| 0.0013 | 1803.0 | 3606 | 1.1869 |
| 0.0013 | 1804.0 | 3608 | 1.1863 |
| 0.0013 | 1805.0 | 3610 | 1.1858 |
| 0.0013 | 1806.0 | 3612 | 1.1855 |
| 0.0013 | 1807.0 | 3614 | 1.1850 |
| 0.0013 | 1808.0 | 3616 | 1.1847 |
| 0.0013 | 1809.0 | 3618 | 1.1850 |
| 0.0013 | 1810.0 | 3620 | 1.1852 |
| 0.0013 | 1811.0 | 3622 | 1.1850 |
| 0.0013 | 1812.0 | 3624 | 1.1847 |
| 0.0013 | 1813.0 | 3626 | 1.1845 |
| 0.0013 | 1814.0 | 3628 | 1.1844 |
| 0.0013 | 1815.0 | 3630 | 1.1840 |
| 0.0013 | 1816.0 | 3632 | 1.1837 |
| 0.0013 | 1817.0 | 3634 | 1.1832 |
| 0.0013 | 1818.0 | 3636 | 1.1830 |
| 0.0013 | 1819.0 | 3638 | 1.1829 |
| 0.0013 | 1820.0 | 3640 | 1.1829 |
| 0.0013 | 1821.0 | 3642 | 1.1831 |
| 0.0013 | 1822.0 | 3644 | 1.1833 |
| 0.0013 | 1823.0 | 3646 | 1.1827 |
| 0.0013 | 1824.0 | 3648 | 1.1825 |
| 0.0013 | 1825.0 | 3650 | 1.1823 |
| 0.0013 | 1826.0 | 3652 | 1.1828 |
| 0.0013 | 1827.0 | 3654 | 1.1833 |
| 0.0013 | 1828.0 | 3656 | 1.1839 |
| 0.0013 | 1829.0 | 3658 | 1.1844 |
| 0.0013 | 1830.0 | 3660 | 1.1848 |
| 0.0013 | 1831.0 | 3662 | 1.1849 |
| 0.0013 | 1832.0 | 3664 | 1.1850 |
| 0.0013 | 1833.0 | 3666 | 1.1850 |
| 0.0013 | 1834.0 | 3668 | 1.1850 |
| 0.0013 | 1835.0 | 3670 | 1.1848 |
| 0.0013 | 1836.0 | 3672 | 1.1847 |
| 0.0013 | 1837.0 | 3674 | 1.1848 |
| 0.0013 | 1838.0 | 3676 | 1.1850 |
| 0.0013 | 1839.0 | 3678 | 1.1852 |
| 0.0013 | 1840.0 | 3680 | 1.1853 |
| 0.0013 | 1841.0 | 3682 | 1.1855 |
| 0.0013 | 1842.0 | 3684 | 1.1857 |
| 0.0013 | 1843.0 | 3686 | 1.1858 |
| 0.0013 | 1844.0 | 3688 | 1.1859 |
| 0.0013 | 1845.0 | 3690 | 1.1860 |
| 0.0013 | 1846.0 | 3692 | 1.1863 |
| 0.0013 | 1847.0 | 3694 | 1.1866 |
| 0.0013 | 1848.0 | 3696 | 1.1867 |
| 0.0013 | 1849.0 | 3698 | 1.1867 |
| 0.0013 | 1850.0 | 3700 | 1.1867 |
| 0.0013 | 1851.0 | 3702 | 1.1868 |
| 0.0013 | 1852.0 | 3704 | 1.1869 |
| 0.0013 | 1853.0 | 3706 | 1.1871 |
| 0.0013 | 1854.0 | 3708 | 1.1872 |
| 0.0013 | 1855.0 | 3710 | 1.1874 |
| 0.0013 | 1856.0 | 3712 | 1.1875 |
| 0.0013 | 1857.0 | 3714 | 1.1875 |
| 0.0013 | 1858.0 | 3716 | 1.1874 |
| 0.0013 | 1859.0 | 3718 | 1.1871 |
| 0.0013 | 1860.0 | 3720 | 1.1867 |
| 0.0013 | 1861.0 | 3722 | 1.1864 |
| 0.0013 | 1862.0 | 3724 | 1.1862 |
| 0.0013 | 1863.0 | 3726 | 1.1851 |
| 0.0013 | 1864.0 | 3728 | 1.1836 |
| 0.0013 | 1865.0 | 3730 | 1.1822 |
| 0.0013 | 1866.0 | 3732 | 1.1812 |
| 0.0013 | 1867.0 | 3734 | 1.1804 |
| 0.0013 | 1868.0 | 3736 | 1.1798 |
| 0.0013 | 1869.0 | 3738 | 1.1793 |
| 0.0013 | 1870.0 | 3740 | 1.1789 |
| 0.0013 | 1871.0 | 3742 | 1.1785 |
| 0.0013 | 1872.0 | 3744 | 1.1780 |
| 0.0013 | 1873.0 | 3746 | 1.1778 |
| 0.0013 | 1874.0 | 3748 | 1.1776 |
| 0.0013 | 1875.0 | 3750 | 1.1775 |
| 0.0013 | 1876.0 | 3752 | 1.1774 |
| 0.0013 | 1877.0 | 3754 | 1.1774 |
| 0.0013 | 1878.0 | 3756 | 1.1773 |
| 0.0013 | 1879.0 | 3758 | 1.1771 |
| 0.0013 | 1880.0 | 3760 | 1.1769 |
| 0.0013 | 1881.0 | 3762 | 1.1769 |
| 0.0013 | 1882.0 | 3764 | 1.1769 |
| 0.0013 | 1883.0 | 3766 | 1.1769 |
| 0.0013 | 1884.0 | 3768 | 1.1770 |
| 0.0013 | 1885.0 | 3770 | 1.1772 |
| 0.0013 | 1886.0 | 3772 | 1.1774 |
| 0.0013 | 1887.0 | 3774 | 1.1774 |
| 0.0013 | 1888.0 | 3776 | 1.1778 |
| 0.0013 | 1889.0 | 3778 | 1.1779 |
| 0.0013 | 1890.0 | 3780 | 1.1779 |
| 0.0013 | 1891.0 | 3782 | 1.1776 |
| 0.0013 | 1892.0 | 3784 | 1.1772 |
| 0.0013 | 1893.0 | 3786 | 1.1768 |
| 0.0013 | 1894.0 | 3788 | 1.1765 |
| 0.0013 | 1895.0 | 3790 | 1.1761 |
| 0.0013 | 1896.0 | 3792 | 1.1758 |
| 0.0013 | 1897.0 | 3794 | 1.1755 |
| 0.0013 | 1898.0 | 3796 | 1.1753 |
| 0.0013 | 1899.0 | 3798 | 1.1753 |
| 0.0013 | 1900.0 | 3800 | 1.1752 |
| 0.0013 | 1901.0 | 3802 | 1.1752 |
| 0.0013 | 1902.0 | 3804 | 1.1754 |
| 0.0013 | 1903.0 | 3806 | 1.1756 |
| 0.0013 | 1904.0 | 3808 | 1.1756 |
| 0.0013 | 1905.0 | 3810 | 1.1754 |
| 0.0013 | 1906.0 | 3812 | 1.1753 |
| 0.0013 | 1907.0 | 3814 | 1.1752 |
| 0.0013 | 1908.0 | 3816 | 1.1751 |
| 0.0013 | 1909.0 | 3818 | 1.1751 |
| 0.0013 | 1910.0 | 3820 | 1.1752 |
| 0.0013 | 1911.0 | 3822 | 1.1754 |
| 0.0013 | 1912.0 | 3824 | 1.1755 |
| 0.0013 | 1913.0 | 3826 | 1.1755 |
| 0.0013 | 1914.0 | 3828 | 1.1756 |
| 0.0013 | 1915.0 | 3830 | 1.1756 |
| 0.0013 | 1916.0 | 3832 | 1.1756 |
| 0.0013 | 1917.0 | 3834 | 1.1759 |
| 0.0013 | 1918.0 | 3836 | 1.1763 |
| 0.0013 | 1919.0 | 3838 | 1.1765 |
| 0.0013 | 1920.0 | 3840 | 1.1767 |
| 0.0013 | 1921.0 | 3842 | 1.1768 |
| 0.0013 | 1922.0 | 3844 | 1.1769 |
| 0.0013 | 1923.0 | 3846 | 1.1769 |
| 0.0013 | 1924.0 | 3848 | 1.1768 |
| 0.0013 | 1925.0 | 3850 | 1.1768 |
| 0.0013 | 1926.0 | 3852 | 1.1768 |
| 0.0013 | 1927.0 | 3854 | 1.1768 |
| 0.0013 | 1928.0 | 3856 | 1.1768 |
| 0.0013 | 1929.0 | 3858 | 1.1769 |
| 0.0013 | 1930.0 | 3860 | 1.1768 |
| 0.0013 | 1931.0 | 3862 | 1.1768 |
| 0.0013 | 1932.0 | 3864 | 1.1767 |
| 0.0013 | 1933.0 | 3866 | 1.1766 |
| 0.0013 | 1934.0 | 3868 | 1.1765 |
| 0.0013 | 1935.0 | 3870 | 1.1764 |
| 0.0013 | 1936.0 | 3872 | 1.1763 |
| 0.0013 | 1937.0 | 3874 | 1.1762 |
| 0.0013 | 1938.0 | 3876 | 1.1761 |
| 0.0013 | 1939.0 | 3878 | 1.1760 |
| 0.0013 | 1940.0 | 3880 | 1.1759 |
| 0.0013 | 1941.0 | 3882 | 1.1759 |
| 0.0013 | 1942.0 | 3884 | 1.1758 |
| 0.0013 | 1943.0 | 3886 | 1.1759 |
| 0.0013 | 1944.0 | 3888 | 1.1760 |
| 0.0013 | 1945.0 | 3890 | 1.1761 |
| 0.0013 | 1946.0 | 3892 | 1.1763 |
| 0.0013 | 1947.0 | 3894 | 1.1765 |
| 0.0013 | 1948.0 | 3896 | 1.1766 |
| 0.0013 | 1949.0 | 3898 | 1.1767 |
| 0.0013 | 1950.0 | 3900 | 1.1769 |
| 0.0013 | 1951.0 | 3902 | 1.1770 |
| 0.0013 | 1952.0 | 3904 | 1.1770 |
| 0.0013 | 1953.0 | 3906 | 1.1771 |
| 0.0013 | 1954.0 | 3908 | 1.1774 |
| 0.0013 | 1955.0 | 3910 | 1.1776 |
| 0.0013 | 1956.0 | 3912 | 1.1777 |
| 0.0013 | 1957.0 | 3914 | 1.1777 |
| 0.0013 | 1958.0 | 3916 | 1.1778 |
| 0.0013 | 1959.0 | 3918 | 1.1775 |
| 0.0013 | 1960.0 | 3920 | 1.1772 |
| 0.0013 | 1961.0 | 3922 | 1.1769 |
| 0.0013 | 1962.0 | 3924 | 1.1768 |
| 0.0013 | 1963.0 | 3926 | 1.1767 |
| 0.0013 | 1964.0 | 3928 | 1.1767 |
| 0.0013 | 1965.0 | 3930 | 1.1766 |
| 0.0013 | 1966.0 | 3932 | 1.1766 |
| 0.0013 | 1967.0 | 3934 | 1.1766 |
| 0.0013 | 1968.0 | 3936 | 1.1765 |
| 0.0013 | 1969.0 | 3938 | 1.1765 |
| 0.0013 | 1970.0 | 3940 | 1.1764 |
| 0.0013 | 1971.0 | 3942 | 1.1765 |
| 0.0013 | 1972.0 | 3944 | 1.1765 |
| 0.0013 | 1973.0 | 3946 | 1.1765 |
| 0.0013 | 1974.0 | 3948 | 1.1765 |
| 0.0013 | 1975.0 | 3950 | 1.1765 |
| 0.0013 | 1976.0 | 3952 | 1.1765 |
| 0.0013 | 1977.0 | 3954 | 1.1764 |
| 0.0013 | 1978.0 | 3956 | 1.1764 |
| 0.0013 | 1979.0 | 3958 | 1.1765 |
| 0.0013 | 1980.0 | 3960 | 1.1765 |
| 0.0013 | 1981.0 | 3962 | 1.1765 |
| 0.0013 | 1982.0 | 3964 | 1.1765 |
| 0.0013 | 1983.0 | 3966 | 1.1765 |
| 0.0013 | 1984.0 | 3968 | 1.1765 |
| 0.0013 | 1985.0 | 3970 | 1.1765 |
| 0.0013 | 1986.0 | 3972 | 1.1765 |
| 0.0013 | 1987.0 | 3974 | 1.1765 |
| 0.0013 | 1988.0 | 3976 | 1.1765 |
| 0.0013 | 1989.0 | 3978 | 1.1764 |
| 0.0013 | 1990.0 | 3980 | 1.1764 |
| 0.0013 | 1991.0 | 3982 | 1.1764 |
| 0.0013 | 1992.0 | 3984 | 1.1764 |
| 0.0013 | 1993.0 | 3986 | 1.1765 |
| 0.0013 | 1994.0 | 3988 | 1.1765 |
| 0.0013 | 1995.0 | 3990 | 1.1765 |
| 0.0013 | 1996.0 | 3992 | 1.1765 |
| 0.0013 | 1997.0 | 3994 | 1.1765 |
| 0.0013 | 1998.0 | 3996 | 1.1765 |
| 0.0013 | 1999.0 | 3998 | 1.1765 |
| 0.0012 | 2000.0 | 4000 | 1.1765 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
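As a rough illustration (not part of the original training code), the `linear` learning-rate schedule implied by the hyperparameters above (lr 2e-5, 2 steps per epoch, 2000 epochs = 4000 total steps) behaves like:

```python
# Sketch of HF Trainer's linear decay with no warmup (lr_scheduler_type: linear).
# The numbers are taken from the hyperparameter list and training log above.
def linear_lr(step, base_lr=2e-5, total_steps=4000):
    """Linearly decay from base_lr to 0 over total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))      # start of training: 2e-05
print(linear_lr(2000))   # halfway: 1e-05
print(linear_lr(4000))   # end of training: 0.0
```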
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_64_0.05_8_0.0002
|
ferrazzipietro
| 2024-03-07T20:11:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T15:11:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Maqqq/OpenHermes-2.5-Mistral-7B-12
|
Maqqq
| 2024-03-07T20:08:24Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T17:44:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TikhonRadkevich/dqn-SpaceInvadersNoFrameskip-v4
|
TikhonRadkevich
| 2024-03-07T20:08:21Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T20:07:43Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 658.00 +/- 197.44
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TikhonRadkevich -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga TikhonRadkevich -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga TikhonRadkevich
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
OwOOwO/eacc_bmk5
|
OwOOwO
| 2024-03-07T20:07:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T20:05:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tsavage68/mistralit2_1000_STEPS_5e7_SFT
|
tsavage68
| 2024-03-07T19:56:56Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:51:38Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistralit2_1000_STEPS_SFT_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistralit2_1000_STEPS_SFT_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.498 | 0.1 | 50 | 0.3952 |
| 0.3309 | 0.2 | 100 | 0.3213 |
| 0.3234 | 0.29 | 150 | 0.3104 |
| 0.2953 | 0.39 | 200 | 0.3048 |
| 0.2967 | 0.49 | 250 | 0.3005 |
| 0.3047 | 0.59 | 300 | 0.2972 |
| 0.2869 | 0.68 | 350 | 0.2943 |
| 0.2912 | 0.78 | 400 | 0.2913 |
| 0.2859 | 0.88 | 450 | 0.2895 |
| 0.2941 | 0.98 | 500 | 0.2880 |
| 0.2412 | 1.07 | 550 | 0.2886 |
| 0.2637 | 1.17 | 600 | 0.2884 |
| 0.2627 | 1.27 | 650 | 0.2882 |
| 0.2443 | 1.37 | 700 | 0.2881 |
| 0.2557 | 1.46 | 750 | 0.2877 |
| 0.259 | 1.56 | 800 | 0.2876 |
| 0.2598 | 1.66 | 850 | 0.2875 |
| 0.2633 | 1.76 | 900 | 0.2876 |
| 0.2727 | 1.86 | 950 | 0.2876 |
| 0.2674 | 1.95 | 1000 | 0.2876 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.0.0+cu117
- Datasets 2.18.0
- Tokenizers 0.15.2
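Note that the listed `total_train_batch_size: 8` follows from `train_batch_size` 4 × `gradient_accumulation_steps` 2. As a sketch (assuming HF Trainer's default cosine schedule, `get_cosine_schedule_with_warmup`), the learning-rate curve implied by the hyperparameters above (lr 5e-7, 100 warmup steps, 1000 training steps) looks like:

```python
import math

# Sketch of linear warmup followed by cosine decay to 0, using the
# hyperparameters listed above. Not taken from the original training script.
def cosine_lr(step, base_lr=5e-7, warmup=100, total=1000):
    """Linear warmup to base_lr, then cosine decay to 0 over the remaining steps."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(cosine_lr(50))    # mid-warmup: 2.5e-07
print(cosine_lr(100))   # peak after warmup: 5e-07
print(cosine_lr(1000))  # end of training: decays to 0
```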
|
OwOOwO/eacc_bmk4
|
OwOOwO
| 2024-03-07T19:56:08Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:53:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TrevorDohm/ViT_Scratch_MNIST
|
TrevorDohm
| 2024-03-07T19:56:01Z | 0 | 0 | null |
[
"license:openrail",
"region:us"
] | null | 2024-03-07T19:14:33Z |
---
license: openrail
---
Vision Transformer trained from scratch on MNIST: https://huggingface.co/datasets/mnist
Guide: https://medium.com/mlearning-ai/vision-transformers-from-scratch-pytorch-a-step-by-step-guide-96c3313c2e0c
- `ViT_Small`: `{"chw": (1, 28, 28), "n_patches": 7, "n_blocks": 4, "hidden_d": 8, "n_heads": 4, "out_d": 10}` (23 kB, 2K+ steps)
- `ViT_Large`: `{"chw": (1, 28, 28), "n_patches": 7, "n_blocks": 6, "hidden_d": 64, "n_heads": 8, "out_d": 10}` (881 kB, 20K+ steps)
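The `n_patches` entry determines how each 28x28 MNIST image is carved into tokens: 7 patches per side gives 49 patches of 4x4 = 16 pixels each. A hypothetical sketch of that patchify step (illustrative names, not the repository's actual code):

```python
import numpy as np

# ViT_Small geometry from the config above
chw, n_patches = (1, 28, 28), 7
c, h, w = chw
p = h // n_patches                       # patch side: 28 // 7 = 4

x = np.random.randn(2, c, h, w)          # a batch of 2 images
# carve each image into a 7x7 grid of 4x4 patches, then flatten each patch
patches = (
    x.reshape(2, c, n_patches, p, n_patches, p)
     .transpose(0, 2, 4, 1, 3, 5)
     .reshape(2, n_patches * n_patches, c * p * p)
)
print(patches.shape)  # (2, 49, 16): the token sequence fed to the blocks
```

Each 16-dimensional patch vector is then projected to `hidden_d` before entering the transformer blocks.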





|
keenGol/emotions_NLP_workshop
|
keenGol
| 2024-03-07T19:54:30Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T15:40:51Z |
---
pipeline_tag: text-classification
---
|
biololab/tinyllama-symptom-extractor_4bit
|
biololab
| 2024-03-07T19:53:16Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T19:52:53Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/tinyllama-bnb-4bit
---
# Uploaded model
- **Developed by:** biololab
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_64_0.05_4_0.0002
|
ferrazzipietro
| 2024-03-07T19:52:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T14:52:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roushan255/test255
|
roushan255
| 2024-03-07T19:52:01Z | 0 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2024-03-07T19:42:52Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
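For reference, the settings listed above correspond roughly to the following `BitsAndBytesConfig` from `transformers`. This is an illustrative reconstruction, not the original training script; the remaining `llm_int8_*` fields match the config class defaults.

```python
import torch
from transformers import BitsAndBytesConfig

# Illustrative reconstruction of the quantization config listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```

It can then be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` when loading the base model.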
### Framework versions
- PEFT 0.4.0
|
DanielClough/Candle_Puffin-Phi-v2
|
DanielClough
| 2024-03-07T19:49:20Z | 22 | 0 |
transformers
|
[
"transformers",
"gguf",
"mixformer-sequential",
"text-generation",
"custom_code",
"en",
"dataset:teknium/Puffin-Phi-v2",
"license:mit",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-26T05:21:31Z |
---
datasets:
- teknium/Puffin-Phi-v2
language:
- en
pipeline_tag: text-generation
license: mit
---
This repo includes `.gguf` files built for HuggingFace/Candle.
They will not work with `llama.cpp`.
Refer to the [original repo](https://huggingface.co/teknium/Puffin-Phi-v2) for more details.
|
OwOOwO/eacc_bmk
|
OwOOwO
| 2024-03-07T19:40:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:38:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arcee-ai/Hermes-Mistral-Saul-Slerp
|
arcee-ai
| 2024-03-07T19:40:41Z | 12 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"Equall/Saul-Instruct-v1",
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T18:44:47Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- Equall/Saul-Instruct-v1
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
---
# Hermes-Mistral-Saul-Slerp
Hermes-Mistral-Saul-Slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Equall/Saul-Instruct-v1](https://huggingface.co/Equall/Saul-Instruct-v1)
* [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Equall/Saul-Instruct-v1
layer_range: [0, 32]
- model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
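The `slerp` method interpolates the two models' weights along the surface of a hypersphere rather than linearly, with the per-filter `t` values above controlling how much of each model is blended at each layer group. A minimal, illustrative sketch of spherical linear interpolation for a pair of weight vectors (not mergekit's actual implementation):

```python
import math
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors."""
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    theta = math.acos(np.clip(np.dot(v0n, v1n), -1.0, 1.0))
    if theta < 1e-6:  # nearly parallel: fall back to plain lerp
        return (1 - t) * v0 + t * v1
    return (math.sin((1 - t) * theta) * v0 + math.sin(t * theta) * v1) / math.sin(theta)

a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = slerp(0.5, a, b)
print(mid)  # halfway along the arc: [0.7071..., 0.7071...]
```

At `t = 0` the result is the first model's weights, at `t = 1` the second's; `t = 0.5` sits midway along the arc, which preserves weight geometry better than a straight average when the two vectors point in different directions.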
|
flyingfishinwater/starcoder2-3b-instruct-gguf
|
flyingfishinwater
| 2024-03-07T19:36:44Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"starcoder2",
"text-generation",
"license:bigcode-openrail-m",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T18:31:21Z |
---
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
---
# GGUF version of starcoder2-instruct
The base model is: [https://huggingface.co/TechxGenus/starcoder2-3b-instruct](https://huggingface.co/TechxGenus/starcoder2-3b-instruct)
Refer to the instructions below.
<p align="center">
<img width="300px" alt="starcoder2-instruct" src="https://huggingface.co/TechxGenus/starcoder2-3b-instruct/resolve/main/starcoder2-instruct.jpg">
</p>
### starcoder2-instruct
We've fine-tuned starcoder2-3b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs. We used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the training process. It achieves **65.9 pass@1** on HumanEval-Python. This model operates using the Alpaca instruction format (excluding the system prompt).
### Usage
Here are some examples of how to use the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = "Your code instruction here"  # replace with your own prompt
prompt = PROMPT.format(instruction=instruction)
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-3b-instruct")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/starcoder2-3b-instruct",
torch_dtype=torch.bfloat16,
device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
With text-generation pipeline:
```python
from transformers import pipeline
import torch
PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = "Your code instruction here"  # replace with your own prompt
prompt = PROMPT.format(instruction=instruction)
generator = pipeline(
model="TechxGenus/starcoder2-3b-instruct",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
### Note
The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding. It has undergone very limited testing; additional safety testing should be performed before any real-world deployment.
|
mehrzad-shahin/aec-ner-distilbert-base
|
mehrzad-shahin
| 2024-03-07T19:35:20Z | 93 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"token-classification",
"ner",
"named-entity-recognition",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-02-28T20:31:56Z |
---
license: mit
tags:
- token-classification
- ner
- named-entity-recognition
pipeline_tag: token-classification
widget:
- text: All air terminals in the 5th to 7th floor were inspected.
example_title: Example 1
- text: Baseboard heaters in the utility room are installed.
example_title: Example 2
---
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_32_0.01_16_0.0002
|
ferrazzipietro
| 2024-03-07T19:33:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T14:33:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RodMed0709/my_awesome_billsum_model
|
RodMed0709
| 2024-03-07T19:24:45Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-07T19:19:06Z |
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5440
- Rouge1: 0.1415
- Rouge2: 0.0479
- Rougel: 0.1163
- Rougelsum: 0.1166
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8342 | 0.1253 | 0.0329 | 0.1044 | 0.1045 | 19.0 |
| No log | 2.0 | 124 | 2.6247 | 0.1354 | 0.0424 | 0.1117 | 0.1119 | 19.0 |
| No log | 3.0 | 186 | 2.5622 | 0.1414 | 0.0497 | 0.1169 | 0.1172 | 19.0 |
| No log | 4.0 | 248 | 2.5440 | 0.1415 | 0.0479 | 0.1163 | 0.1166 | 19.0 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Supreeth40/finetuned-bart-xsum
|
Supreeth40
| 2024-03-07T19:23:21Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-07T14:53:06Z |
---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: finetuned-bart-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bart-xsum
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8225 | 0.56 | 1000 | 0.4618 |
| 0.4988 | 1.11 | 2000 | 0.4510 |
| 0.4358 | 1.67 | 3000 | 0.4439 |
| 0.4073 | 2.22 | 4000 | 0.4435 |
| 0.3748 | 2.78 | 5000 | 0.4392 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ambrosfitz/tinyllama-history-chat_v0.1ps
|
ambrosfitz
| 2024-03-07T19:22:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"education - history",
"conversational",
"en",
"dataset:ambrosfitz/ps_data",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T03:58:28Z |
---
library_name: transformers
tags:
- llama
- education - history
license: apache-2.0
datasets:
- ambrosfitz/ps_data
language:
- en
pipeline_tag: text-generation
---
|
JiunYi/gemma-Code-Instruct-Finetune-test
|
JiunYi
| 2024-03-07T19:19:17Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T19:14:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s14pe/a2c-PandaPickAndPlace-v3
|
s14pe
| 2024-03-07T19:15:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T19:11:21Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -45.00 +/- 15.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption based on the repo name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the trained checkpoint from the Hub and load it.
checkpoint = load_from_hub(
    repo_id="s14pe/a2c-PandaPickAndPlace-v3",
    filename="a2c-PandaPickAndPlace-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```
|
ritwikraha/comics_style_LoRA
|
ritwikraha
| 2024-03-07T19:05:27Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"art",
"code",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"license:cc-by-2.0",
"region:us"
] | null | 2024-03-07T18:05:08Z |
---
license: cc-by-2.0
library_name: diffusers
tags:
- art
- code
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
# SDXL LoRA DreamBooth - comic_style_LoRA
<Gallery />
| Image 1 | Image 2 |
|---|---|
|  |  |
| Image 3 | Image 4 |
|---|---|
|  |  |
## Model description
These are comic_style LoRA adaption weights for `stabilityai/stable-diffusion-xl-base-1.0`.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled.
Special VAE used for training: `madebyollin/sdxl-vae-fp16-fix`.
Dataset: custom hand-drawn sketches by [ritwikraha](https://www.ritwikraha.com/)
## Trigger words
You should use `a photo in the style of TOK comics` in your prompt to trigger the image generation.
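To make the trigger usage concrete, here is a small illustrative helper (the `make_prompt` function is hypothetical, not part of the model) that composes the trigger phrase into a prompt like the one used in the Usage section below:

```python
TRIGGER = "in the style of TOK comics"

def make_prompt(subject: str) -> str:
    # Append the LoRA trigger phrase so the adapted comic style is activated.
    return f"a photo of {subject} {TRIGGER}, 8k"

print(make_prompt("18th century London"))
# a photo of 18th century London in the style of TOK comics, 8k
```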
## Usage
```bash
pip install diffusers accelerate -q
```

```python
import torch
from PIL import Image
from diffusers import DiffusionPipeline, AutoencoderKL
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.load_lora_weights('ritwikraha/comics_style_LoRA')
_ = pipe.to("cuda")
prompt = "a photo of 18th century London in the style of TOK comics, 8k"
negative_prompt = "ugly face, multiple bodies, bad anatomy, disfigured, extra fingers"
image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    guidance_scale=3,
    num_inference_steps=50,
).images[0]
image
```
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/ritwikraha/comics_style_LoRA/tree/main) them in the Files & versions tab.
---
|
deepnet/SN6-30M2
|
deepnet
| 2024-03-07T18:56:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T18:50:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Weni/ZeroShot-3.3.30-Mistral-7b-Multilanguage-3.2.0-merged
|
Weni
| 2024-03-07T18:53:33Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T18:43:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
xanderios/first-model
|
xanderios
| 2024-03-07T18:49:23Z | 0 | 0 | null |
[
"code",
"text-classification",
"en",
"dataset:xanderios/linkedin-job-postings",
"license:mit",
"region:us"
] |
text-classification
| 2024-03-07T08:19:30Z |
---
license: mit
datasets:
- xanderios/linkedin-job-postings
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- code
---
|
Humaid-alblooshi/bert-train-6layer-optimized-5-epoch
|
Humaid-alblooshi
| 2024-03-07T18:49:10Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T18:49:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deepnet/SN6-77G2
|
deepnet
| 2024-03-07T18:38:38Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T12:15:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ferrazzipietro/Qwen1.5-14B-Chat__adapters_en.layer1_4_torch.bfloat16_16_32_0.05_16_0.0002
|
ferrazzipietro
| 2024-03-07T18:38:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T13:36:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Locutusque/Hyperion-1.5-Mistral-7B
|
Locutusque
| 2024-03-07T18:30:24Z | 107 | 9 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:Locutusque/hyperion-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-02T19:22:02Z |
---
license: apache-2.0
library_name: transformers
tags:
- conversational
datasets:
- Locutusque/hyperion-v1.5
model-index:
- name: Hyperion-1.5-Mistral-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.49
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.64
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 41.78
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.61
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 40.49
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Locutusque/Hyperion-1.5-Mistral-7B
name: Open LLM Leaderboard
---
# Model Card for Locutusque/Hyperion-1.5-Mistral-7B

## Model Details
**Model Name**: Locutusque/Hyperion-1.5-Mistral-7B
**Base Model**: mistralai/Mistral-7B-v0.1
**Publisher**: M4-ai
**Model Type**: Question answering, conversational AI, code generation, medical text comprehension, mathematical reasoning, logical reasoning.
**Language**: Multi-domain, English language.
**License**: Apache-2.0
## Model Description
`Locutusque/Hyperion-1.5-Mistral-7B` is a state-of-the-art language model fine-tuned on the Hyperion dataset for advanced reasoning across scientific domains. This model is designed to handle complex inquiries and instructions, leveraging the diverse and rich information contained in the Hyperion dataset. Its primary use cases include but are not limited to complex question answering, conversational understanding, code generation, medical text comprehension, mathematical reasoning, and logical reasoning.
## Intended Use
This model is intended for researchers and practitioners looking for a powerful tool to tackle challenging problems in scientific domains. It can be used in the following scenarios:
- AI-driven tutoring systems for science, medicine, mathematics, and computer science.
- Assistive tools for professionals requiring fast and accurate domain-specific information retrieval.
- Platforms that require conversational AI capabilities with a focus on technical and scientific reasoning.
- Automation in code generation and understanding complex programming context.
## Training Data
The `Locutusque/Hyperion-1.5-Mistral-7B` model was fine-tuned on the Hyperion-v1.5 dataset, which amalgamates various datasets rich in diversity and complexity, including programming, medical texts, mathematical problems, and reasoning tasks.
## Evaluation Results
Coming soon...
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Locutusque/Hyperion-1.5-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# For a text generation task
input_text = "<|im_start|>user\nWhat are the implications of Einstein's theory of relativity in modern physics?<|im_end|>\n<|im_start|>assistant\n"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate a response
outputs = model.generate(input_ids, max_length=200, num_return_sequences=1, do_sample=True, temperature=0.8, top_p=0.95, top_k=40, repetition_penalty=1.1)  # do_sample=True is required for temperature/top_p/top_k to take effect
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Known Limitations
The diversity of the dataset could lead to inconsistencies in the model's responses due to variations in data formatting and annotation quality.
## Licensing Information
This model is released under the Apache-2.0 license.
## Citation Information
If you use Locutusque/Hyperion-1.5-Mistral-7B in your research, please cite the Hyperion dataset as follows:
```
@misc{sebastian_gabarain_2024,
title = {Hyperion-1.5: Illuminating the Path to Advanced Reasoning with a High-Quality, Multidisciplinary Question Answering Dataset},
author = {Sebastian Gabarain},
publisher = {HuggingFace},
year = {2024},
url = {https://huggingface.co/datasets/Locutusque/hyperion-v1.5}
}
```
## Quants
exl2 and GGUF quantizations by bartowski:
- https://huggingface.co/bartowski/Hyperion-1.5-Mistral-7B-exl2
- https://huggingface.co/bartowski/Hyperion-1.5-Mistral-7B-GGUF
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Hyperion-1.5-Mistral-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |61.43|
|AI2 Reasoning Challenge (25-Shot)|60.49|
|HellaSwag (10-Shot) |83.64|
|MMLU (5-Shot) |63.57|
|TruthfulQA (0-shot) |41.78|
|Winogrande (5-shot) |78.61|
|GSM8k (5-shot) |40.49|
|
AhmedKaisar/bert-ner
|
AhmedKaisar
| 2024-03-07T18:26:52Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-29T13:45:57Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9047308319738988
- name: Recall
type: recall
value: 0.9333557724671828
- name: F1
type: f1
value: 0.9188204108681245
- name: Accuracy
type: accuracy
value: 0.9824424559957614
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0326
- Precision: 0.9047
- Recall: 0.9334
- F1: 0.9188
- Accuracy: 0.9824
## Model description
More information needed
## Intended uses & limitations
More information needed
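The card does not yet document usage. A minimal inference sketch (not part of the original card; the repo id is this model's Hub id, and `aggregation_strategy="simple"` merges word-piece tokens into whole entity spans):

```python
from transformers import pipeline

# Load the fine-tuned NER checkpoint; aggregation merges sub-word tokens
ner = pipeline("token-classification",
               model="AhmedKaisar/bert-ner",
               aggregation_strategy="simple")

for ent in ner("Hugging Face is based in New York City."):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```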
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0349 | 1.0 | 1756 | 0.0326 | 0.9047 | 0.9334 | 0.9188 | 0.9824 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.2
- Datasets 2.12.0
- Tokenizers 0.13.2
|
sanbongazin/willgpt-Gemma_v2
|
sanbongazin
| 2024-03-07T18:24:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T18:24:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sujayC66/t5-base-finetuned-stocknews_2000_150
|
sujayC66
| 2024-03-07T18:21:12Z | 18 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-07T07:52:03Z |
---
license: apache-2.0
base_model: google-t5/t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-stocknews_2000_150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-stocknews_2000_150
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5246
- Rouge1: 41.1174
- Rouge2: 36.4917
- Rougel: 40.2739
- Rougelsum: 40.5043
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
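The card does not yet show usage. A minimal sketch (not part of the original card; the checkpoint id is this model's Hub id, and the `summarize:` prefix follows the usual T5 convention, which is assumed here):

```python
from transformers import pipeline

summarizer = pipeline("summarization",
                      model="sujayC66/t5-base-finetuned-stocknews_2000_150")

# Hypothetical input; the "summarize:" prefix is the standard T5 task prompt
article = ("summarize: Shares of Acme Corp rose 5% on Thursday after the company "
           "beat quarterly earnings estimates and raised its full-year guidance.")
# Gen Len in the results table is 19, so cap generation accordingly
print(summarizer(article, max_length=19, do_sample=False)[0]["summary_text"])
```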
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 211 | 0.4220 | 37.4081 | 29.7287 | 35.6792 | 36.0611 | 19.0 |
| No log | 2.0 | 422 | 0.4020 | 37.6979 | 30.5377 | 36.0747 | 36.4168 | 19.0 |
| 0.3832 | 3.0 | 633 | 0.3947 | 38.258 | 31.0862 | 36.5414 | 37.0213 | 19.0 |
| 0.3832 | 4.0 | 844 | 0.3850 | 38.4834 | 31.3747 | 36.8077 | 37.2317 | 19.0 |
| 0.2939 | 5.0 | 1055 | 0.3765 | 38.8131 | 32.3372 | 37.3919 | 37.7305 | 19.0 |
| 0.2939 | 6.0 | 1266 | 0.3762 | 39.1749 | 33.0152 | 37.6824 | 38.0201 | 19.0 |
| 0.2939 | 7.0 | 1477 | 0.3569 | 39.2336 | 32.9984 | 37.8439 | 38.1723 | 19.0 |
| 0.2511 | 8.0 | 1688 | 0.3551 | 39.452 | 33.6999 | 38.3731 | 38.5895 | 19.0 |
| 0.2511 | 9.0 | 1899 | 0.3523 | 39.8924 | 34.2746 | 38.6913 | 38.9944 | 19.0 |
| 0.2532 | 10.0 | 2110 | 0.3487 | 39.9155 | 34.2762 | 38.8052 | 39.077 | 19.0 |
| 0.2532 | 11.0 | 2321 | 0.3533 | 39.7805 | 34.2195 | 38.6591 | 38.9007 | 19.0 |
| 0.2158 | 12.0 | 2532 | 0.3529 | 39.6286 | 34.2772 | 38.5553 | 38.8225 | 19.0 |
| 0.2158 | 13.0 | 2743 | 0.3506 | 40.1899 | 35.0527 | 39.2227 | 39.4969 | 19.0 |
| 0.2158 | 14.0 | 2954 | 0.3474 | 40.666 | 35.5759 | 39.6311 | 39.9267 | 19.0 |
| 0.1882 | 15.0 | 3165 | 0.3488 | 40.4267 | 35.2551 | 39.2486 | 39.5608 | 19.0 |
| 0.1882 | 16.0 | 3376 | 0.3547 | 40.6478 | 35.5519 | 39.6034 | 39.8449 | 19.0 |
| 0.1612 | 17.0 | 3587 | 0.3616 | 40.7061 | 35.8348 | 39.8034 | 40.0508 | 19.0 |
| 0.1612 | 18.0 | 3798 | 0.3621 | 40.7052 | 35.8514 | 39.7689 | 40.0123 | 19.0 |
| 0.1434 | 19.0 | 4009 | 0.3632 | 40.5196 | 35.649 | 39.5977 | 39.8099 | 19.0 |
| 0.1434 | 20.0 | 4220 | 0.3667 | 40.8356 | 35.9832 | 39.9295 | 40.1647 | 19.0 |
| 0.1434 | 21.0 | 4431 | 0.3711 | 40.75 | 35.7893 | 39.7533 | 40.0671 | 19.0 |
| 0.1248 | 22.0 | 4642 | 0.3714 | 40.6404 | 35.8139 | 39.6508 | 39.9206 | 19.0 |
| 0.1248 | 23.0 | 4853 | 0.3720 | 40.596 | 35.7999 | 39.7515 | 39.9484 | 19.0 |
| 0.1097 | 24.0 | 5064 | 0.3766 | 40.6635 | 35.8029 | 39.8031 | 40.023 | 19.0 |
| 0.1097 | 25.0 | 5275 | 0.3841 | 40.6312 | 35.7811 | 39.7593 | 40.0159 | 19.0 |
| 0.1097 | 26.0 | 5486 | 0.3874 | 40.6912 | 35.85 | 39.7479 | 40.0379 | 19.0 |
| 0.0994 | 27.0 | 5697 | 0.3840 | 40.7263 | 35.9777 | 39.8711 | 40.1549 | 19.0 |
| 0.0994 | 28.0 | 5908 | 0.3935 | 40.7512 | 35.8443 | 39.7654 | 40.052 | 19.0 |
| 0.0877 | 29.0 | 6119 | 0.3942 | 40.801 | 35.9741 | 39.8594 | 40.0986 | 19.0 |
| 0.0877 | 30.0 | 6330 | 0.3977 | 40.9239 | 36.1363 | 40.0563 | 40.319 | 19.0 |
| 0.0786 | 31.0 | 6541 | 0.4009 | 40.8977 | 36.1534 | 40.0016 | 40.2385 | 19.0 |
| 0.0786 | 32.0 | 6752 | 0.3996 | 40.7816 | 36.1552 | 39.9214 | 40.1717 | 19.0 |
| 0.0786 | 33.0 | 6963 | 0.4023 | 40.9965 | 36.3464 | 40.1217 | 40.3481 | 19.0 |
| 0.0723 | 34.0 | 7174 | 0.4086 | 40.8352 | 36.1049 | 39.8852 | 40.142 | 19.0 |
| 0.0723 | 35.0 | 7385 | 0.4048 | 40.9399 | 36.2465 | 40.0545 | 40.3178 | 19.0 |
| 0.0654 | 36.0 | 7596 | 0.4097 | 40.9975 | 36.2784 | 40.0802 | 40.3726 | 19.0 |
| 0.0654 | 37.0 | 7807 | 0.4117 | 40.851 | 36.1677 | 40.0313 | 40.3027 | 19.0 |
| 0.0592 | 38.0 | 8018 | 0.4164 | 40.9427 | 36.2783 | 40.1323 | 40.4087 | 19.0 |
| 0.0592 | 39.0 | 8229 | 0.4187 | 40.6632 | 36.0088 | 39.8049 | 40.0361 | 19.0 |
| 0.0592 | 40.0 | 8440 | 0.4188 | 41.008 | 36.3243 | 40.1924 | 40.466 | 19.0 |
| 0.0557 | 41.0 | 8651 | 0.4244 | 40.887 | 36.2373 | 40.0544 | 40.3017 | 19.0 |
| 0.0557 | 42.0 | 8862 | 0.4219 | 40.8024 | 36.1323 | 39.9768 | 40.2685 | 19.0 |
| 0.0516 | 43.0 | 9073 | 0.4234 | 40.7758 | 36.1291 | 39.9284 | 40.1658 | 19.0 |
| 0.0516 | 44.0 | 9284 | 0.4268 | 40.8067 | 36.1192 | 39.9735 | 40.212 | 19.0 |
| 0.0516 | 45.0 | 9495 | 0.4229 | 40.8445 | 36.0577 | 39.9435 | 40.1851 | 19.0 |
| 0.0473 | 46.0 | 9706 | 0.4343 | 40.7118 | 36.1068 | 39.9453 | 40.1875 | 19.0 |
| 0.0473 | 47.0 | 9917 | 0.4311 | 40.7688 | 36.0953 | 39.9612 | 40.1921 | 19.0 |
| 0.0438 | 48.0 | 10128 | 0.4376 | 40.9327 | 36.2236 | 40.0164 | 40.2675 | 19.0 |
| 0.0438 | 49.0 | 10339 | 0.4360 | 41.0039 | 36.3548 | 40.0958 | 40.3716 | 19.0 |
| 0.0408 | 50.0 | 10550 | 0.4418 | 40.9386 | 36.3116 | 40.0052 | 40.2586 | 19.0 |
| 0.0408 | 51.0 | 10761 | 0.4436 | 41.0744 | 36.421 | 40.1518 | 40.4014 | 19.0 |
| 0.0408 | 52.0 | 10972 | 0.4427 | 41.1198 | 36.4495 | 40.2116 | 40.4505 | 19.0 |
| 0.0382 | 53.0 | 11183 | 0.4428 | 41.0544 | 36.4075 | 40.1852 | 40.4269 | 19.0 |
| 0.0382 | 54.0 | 11394 | 0.4468 | 41.0366 | 36.3513 | 40.1403 | 40.361 | 19.0 |
| 0.0354 | 55.0 | 11605 | 0.4463 | 40.9558 | 36.3748 | 40.1348 | 40.3447 | 19.0 |
| 0.0354 | 56.0 | 11816 | 0.4508 | 40.8857 | 36.3143 | 40.0455 | 40.2318 | 19.0 |
| 0.0338 | 57.0 | 12027 | 0.4544 | 40.8272 | 36.244 | 40.0023 | 40.2384 | 19.0 |
| 0.0338 | 58.0 | 12238 | 0.4555 | 40.9537 | 36.1908 | 40.0228 | 40.2483 | 19.0 |
| 0.0338 | 59.0 | 12449 | 0.4521 | 40.9258 | 36.1708 | 40.0611 | 40.3071 | 19.0 |
| 0.031 | 60.0 | 12660 | 0.4555 | 40.8837 | 36.147 | 40.0305 | 40.2382 | 19.0 |
| 0.031 | 61.0 | 12871 | 0.4566 | 40.9297 | 36.2576 | 40.09 | 40.2747 | 19.0 |
| 0.0307 | 62.0 | 13082 | 0.4562 | 40.8585 | 36.2582 | 40.0722 | 40.25 | 19.0 |
| 0.0307 | 63.0 | 13293 | 0.4592 | 40.9201 | 36.2751 | 40.0861 | 40.3269 | 19.0 |
| 0.0281 | 64.0 | 13504 | 0.4567 | 40.9232 | 36.2481 | 40.0753 | 40.3216 | 19.0 |
| 0.0281 | 65.0 | 13715 | 0.4606 | 41.0077 | 36.3489 | 40.1395 | 40.3744 | 19.0 |
| 0.0281 | 66.0 | 13926 | 0.4649 | 41.0042 | 36.5452 | 40.2019 | 40.4466 | 19.0 |
| 0.0263 | 67.0 | 14137 | 0.4674 | 40.9152 | 36.4575 | 40.2074 | 40.4128 | 19.0 |
| 0.0263 | 68.0 | 14348 | 0.4638 | 40.9942 | 36.4242 | 40.2192 | 40.4164 | 19.0 |
| 0.0258 | 69.0 | 14559 | 0.4652 | 41.0026 | 36.3871 | 40.1336 | 40.3569 | 19.0 |
| 0.0258 | 70.0 | 14770 | 0.4683 | 40.9275 | 36.4236 | 40.0798 | 40.3247 | 19.0 |
| 0.0258 | 71.0 | 14981 | 0.4729 | 40.9299 | 36.2989 | 40.1179 | 40.3533 | 19.0 |
| 0.0245 | 72.0 | 15192 | 0.4713 | 40.8745 | 36.2617 | 40.0829 | 40.3073 | 19.0 |
| 0.0245 | 73.0 | 15403 | 0.4720 | 40.9534 | 36.4602 | 40.1804 | 40.4279 | 19.0 |
| 0.0231 | 74.0 | 15614 | 0.4762 | 41.055 | 36.552 | 40.2672 | 40.5027 | 19.0 |
| 0.0231 | 75.0 | 15825 | 0.4776 | 40.939 | 36.492 | 40.1735 | 40.3718 | 19.0 |
| 0.0219 | 76.0 | 16036 | 0.4814 | 41.0543 | 36.6498 | 40.3146 | 40.5381 | 19.0 |
| 0.0219 | 77.0 | 16247 | 0.4826 | 41.0015 | 36.5925 | 40.2389 | 40.4813 | 19.0 |
| 0.0219 | 78.0 | 16458 | 0.4840 | 41.0486 | 36.6352 | 40.3106 | 40.5603 | 19.0 |
| 0.0213 | 79.0 | 16669 | 0.4848 | 40.9784 | 36.4886 | 40.1903 | 40.439 | 19.0 |
| 0.0213 | 80.0 | 16880 | 0.4910 | 41.175 | 36.6854 | 40.3474 | 40.5917 | 19.0 |
| 0.0204 | 81.0 | 17091 | 0.4843 | 41.0851 | 36.5354 | 40.3005 | 40.5392 | 19.0 |
| 0.0204 | 82.0 | 17302 | 0.4847 | 41.2714 | 36.6856 | 40.4516 | 40.672 | 19.0 |
| 0.0196 | 83.0 | 17513 | 0.4860 | 40.9692 | 36.3916 | 40.1273 | 40.3602 | 19.0 |
| 0.0196 | 84.0 | 17724 | 0.4870 | 40.9497 | 36.3933 | 40.1057 | 40.3926 | 19.0 |
| 0.0196 | 85.0 | 17935 | 0.4827 | 41.0823 | 36.5005 | 40.2376 | 40.4651 | 19.0 |
| 0.019 | 86.0 | 18146 | 0.4889 | 41.1902 | 36.6614 | 40.3848 | 40.6069 | 19.0 |
| 0.019 | 87.0 | 18357 | 0.4890 | 41.186 | 36.6136 | 40.4576 | 40.6462 | 19.0 |
| 0.0179 | 88.0 | 18568 | 0.4940 | 41.1593 | 36.5153 | 40.377 | 40.5727 | 19.0 |
| 0.0179 | 89.0 | 18779 | 0.4908 | 40.9712 | 36.43 | 40.1811 | 40.3797 | 19.0 |
| 0.0179 | 90.0 | 18990 | 0.4914 | 41.0358 | 36.4656 | 40.1936 | 40.4449 | 19.0 |
| 0.0176 | 91.0 | 19201 | 0.4924 | 40.8918 | 36.3329 | 40.0398 | 40.2895 | 19.0 |
| 0.0176 | 92.0 | 19412 | 0.4913 | 41.0889 | 36.3829 | 40.213 | 40.4163 | 19.0 |
| 0.0168 | 93.0 | 19623 | 0.4939 | 41.048 | 36.407 | 40.1863 | 40.4131 | 19.0 |
| 0.0168 | 94.0 | 19834 | 0.4996 | 41.0211 | 36.3687 | 40.1492 | 40.3375 | 19.0 |
| 0.016 | 95.0 | 20045 | 0.5000 | 40.8562 | 36.2496 | 39.9959 | 40.2259 | 19.0 |
| 0.016 | 96.0 | 20256 | 0.4989 | 41.0123 | 36.3468 | 40.1217 | 40.3407 | 19.0 |
| 0.016 | 97.0 | 20467 | 0.5004 | 41.0992 | 36.4577 | 40.1794 | 40.4175 | 19.0 |
| 0.0163 | 98.0 | 20678 | 0.5009 | 41.0319 | 36.3625 | 40.1331 | 40.3442 | 19.0 |
| 0.0163 | 99.0 | 20889 | 0.4978 | 40.8888 | 36.238 | 40.0311 | 40.2348 | 19.0 |
| 0.0154 | 100.0 | 21100 | 0.5059 | 40.9034 | 36.2802 | 40.033 | 40.2534 | 19.0 |
| 0.0154 | 101.0 | 21311 | 0.5026 | 41.0808 | 36.4192 | 40.211 | 40.4242 | 19.0 |
| 0.0148 | 102.0 | 21522 | 0.5043 | 41.1898 | 36.4732 | 40.3336 | 40.5495 | 19.0 |
| 0.0148 | 103.0 | 21733 | 0.5062 | 41.216 | 36.6109 | 40.408 | 40.6201 | 19.0 |
| 0.0148 | 104.0 | 21944 | 0.5076 | 40.9136 | 36.2326 | 40.043 | 40.274 | 19.0 |
| 0.0142 | 105.0 | 22155 | 0.5085 | 41.1476 | 36.5099 | 40.3444 | 40.5131 | 19.0 |
| 0.0142 | 106.0 | 22366 | 0.5087 | 41.1 | 36.4271 | 40.2888 | 40.4809 | 19.0 |
| 0.0137 | 107.0 | 22577 | 0.5083 | 40.8868 | 36.2128 | 40.0356 | 40.2519 | 19.0 |
| 0.0137 | 108.0 | 22788 | 0.5097 | 41.0436 | 36.4065 | 40.2004 | 40.4431 | 19.0 |
| 0.0137 | 109.0 | 22999 | 0.5113 | 41.1789 | 36.617 | 40.3938 | 40.5925 | 19.0 |
| 0.0137 | 110.0 | 23210 | 0.5127 | 40.989 | 36.3659 | 40.1097 | 40.3074 | 19.0 |
| 0.0137 | 111.0 | 23421 | 0.5144 | 41.0157 | 36.3607 | 40.1239 | 40.3237 | 19.0 |
| 0.0132 | 112.0 | 23632 | 0.5153 | 40.9412 | 36.3165 | 40.0601 | 40.283 | 19.0 |
| 0.0132 | 113.0 | 23843 | 0.5127 | 41.011 | 36.3343 | 40.1059 | 40.3317 | 19.0 |
| 0.0138 | 114.0 | 24054 | 0.5174 | 40.9507 | 36.3226 | 40.0426 | 40.2821 | 19.0 |
| 0.0138 | 115.0 | 24265 | 0.5172 | 40.9169 | 36.2471 | 40.0189 | 40.2581 | 19.0 |
| 0.0138 | 116.0 | 24476 | 0.5191 | 40.9621 | 36.2937 | 40.0859 | 40.2872 | 19.0 |
| 0.0129 | 117.0 | 24687 | 0.5164 | 40.9124 | 36.2428 | 40.0247 | 40.2636 | 19.0 |
| 0.0129 | 118.0 | 24898 | 0.5217 | 40.8482 | 36.2412 | 39.983 | 40.2084 | 19.0 |
| 0.0131 | 119.0 | 25109 | 0.5191 | 40.9377 | 36.3549 | 40.0702 | 40.303 | 19.0 |
| 0.0131 | 120.0 | 25320 | 0.5206 | 41.0878 | 36.5262 | 40.2577 | 40.4903 | 19.0 |
| 0.0123 | 121.0 | 25531 | 0.5223 | 40.9777 | 36.4348 | 40.1438 | 40.3255 | 19.0 |
| 0.0123 | 122.0 | 25742 | 0.5200 | 40.9512 | 36.2822 | 40.0795 | 40.2998 | 19.0 |
| 0.0123 | 123.0 | 25953 | 0.5244 | 40.9508 | 36.3301 | 40.0726 | 40.3256 | 19.0 |
| 0.0125 | 124.0 | 26164 | 0.5225 | 41.1733 | 36.4561 | 40.3336 | 40.5512 | 19.0 |
| 0.0125 | 125.0 | 26375 | 0.5240 | 41.0364 | 36.4154 | 40.189 | 40.4268 | 19.0 |
| 0.0118 | 126.0 | 26586 | 0.5246 | 41.1267 | 36.4904 | 40.3025 | 40.5672 | 19.0 |
| 0.0118 | 127.0 | 26797 | 0.5214 | 40.9609 | 36.417 | 40.1255 | 40.3472 | 19.0 |
| 0.0125 | 128.0 | 27008 | 0.5196 | 41.1335 | 36.4937 | 40.3248 | 40.5371 | 19.0 |
| 0.0125 | 129.0 | 27219 | 0.5214 | 41.1757 | 36.606 | 40.3908 | 40.6112 | 19.0 |
| 0.0125 | 130.0 | 27430 | 0.5190 | 41.1436 | 36.5116 | 40.344 | 40.5505 | 19.0 |
| 0.012 | 131.0 | 27641 | 0.5227 | 41.0854 | 36.5638 | 40.2975 | 40.5342 | 19.0 |
| 0.012 | 132.0 | 27852 | 0.5233 | 41.0652 | 36.5087 | 40.2447 | 40.4784 | 19.0 |
| 0.0117 | 133.0 | 28063 | 0.5251 | 41.1272 | 36.4621 | 40.2664 | 40.4917 | 19.0 |
| 0.0117 | 134.0 | 28274 | 0.5215 | 41.1819 | 36.5561 | 40.3583 | 40.5515 | 19.0 |
| 0.0117 | 135.0 | 28485 | 0.5219 | 41.1615 | 36.5308 | 40.323 | 40.5283 | 19.0 |
| 0.0116 | 136.0 | 28696 | 0.5228 | 41.0947 | 36.4701 | 40.2537 | 40.4725 | 19.0 |
| 0.0116 | 137.0 | 28907 | 0.5211 | 41.1187 | 36.4948 | 40.2711 | 40.4957 | 19.0 |
| 0.0114 | 138.0 | 29118 | 0.5219 | 41.0826 | 36.4684 | 40.2557 | 40.4678 | 19.0 |
| 0.0114 | 139.0 | 29329 | 0.5223 | 41.1453 | 36.5356 | 40.3132 | 40.5333 | 19.0 |
| 0.0111 | 140.0 | 29540 | 0.5237 | 41.1055 | 36.4938 | 40.2656 | 40.4907 | 19.0 |
| 0.0111 | 141.0 | 29751 | 0.5241 | 41.1391 | 36.4983 | 40.2896 | 40.5215 | 19.0 |
| 0.0111 | 142.0 | 29962 | 0.5243 | 41.1702 | 36.5621 | 40.3401 | 40.5579 | 19.0 |
| 0.0112 | 143.0 | 30173 | 0.5242 | 41.1499 | 36.5609 | 40.3355 | 40.5387 | 19.0 |
| 0.0112 | 144.0 | 30384 | 0.5236 | 41.1261 | 36.5274 | 40.3011 | 40.522 | 19.0 |
| 0.011 | 145.0 | 30595 | 0.5240 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.011 | 146.0 | 30806 | 0.5248 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 147.0 | 31017 | 0.5241 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 148.0 | 31228 | 0.5243 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0106 | 149.0 | 31439 | 0.5245 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
| 0.0105 | 150.0 | 31650 | 0.5246 | 41.1174 | 36.4917 | 40.2739 | 40.5043 | 19.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
rendchevi/roberta-per-v0.1
|
rendchevi
| 2024-03-07T18:12:51Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T17:45:17Z |
---
library_name: transformers
tags: []
---
```py
# Inference sketch; torch, the tokenizer, and the model are loaded below
# (the repo id is assumed to be this checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("rendchevi/roberta-per-v0.1")
model = AutoModelForSequenceClassification.from_pretrained("rendchevi/roberta-per-v0.1").to(device)

def scaling(x, min_x, max_x, r1, r2):
    # Scale data x (n_samples x 1) from [min_x, max_x] to [r1, r2]
    return r1 + (x - min_x) * (r2 - r1) / (max_x - min_x)

def descaling(x_s, min_x, max_x, r1, r2):
    # Re-scale data x_s (n_samples x 1) from [r1, r2] back to [min_x, max_x]
    return (x_s - r1) * (max_x - min_x) / (r2 - r1) + min_x

# Inference example
with torch.no_grad():
    x = "They are equally important, absolutely, and just as real as each other."
    x = tokenizer([x], return_tensors="pt", add_special_tokens=True, padding=True)
    y_hat = model(**x.to(device)).logits
    y_hat = torch.tanh(y_hat).cpu()  # squash logits into [-1, 1]
    l_hat = descaling(y_hat, 1, 7, -1, 1)[0].numpy()  # map to [1, 7] trait scores
    print(l_hat)
# [C, O, E, A, S]
# [6.0583944 4.4941516 1.6538751 5.5261126 4.725995 ]
```
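As a quick sanity check, `scaling` and `descaling` are exact inverses of one another for any fixed range pair. A self-contained sketch (plain Python, no model needed; the example values are hypothetical):

```python
def scaling(x, min_x, max_x, r1, r2):
    # [min_x, max_x] -> [r1, r2]
    return r1 + (x - min_x) * (r2 - r1) / (max_x - min_x)

def descaling(x_s, min_x, max_x, r1, r2):
    # [r1, r2] -> [min_x, max_x]
    return (x_s - r1) * (max_x - min_x) / (r2 - r1) + min_x

# tanh outputs in [-1, 1] map onto 1-7 trait scores and back
squashed = (0.9, -0.3, 0.0)
scores = [descaling(y, 1, 7, -1, 1) for y in squashed]
print(scores)  # ≈ [6.7, 3.1, 4.0]
assert all(abs(scaling(s, 1, 7, -1, 1) - y) < 1e-9 for s, y in zip(scores, squashed))
```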
|
JayShah07/pii_model
|
JayShah07
| 2024-03-07T18:04:46Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-07T16:51:34Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: pii_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pii_model
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0009
- Precision: 0.7387
- Recall: 0.7736
- F1: 0.7558
- Accuracy: 0.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
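The hyperparameters above correspond roughly to the following configuration in the shape `transformers.TrainingArguments` expects (a hypothetical reconstruction — the actual training script is not part of this card):

```python
# Hypothetical reconstruction of the listed hyperparameters; pass as
# TrainingArguments(output_dir=..., **training_config) in a real script.
training_config = {
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "seed": 42,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 5,
}
print(training_config["num_train_epochs"])
```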
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 192 | 0.0023 | 0.0 | 0.0 | 0.0 | 0.9993 |
| No log | 2.0 | 384 | 0.0012 | 0.75 | 0.7358 | 0.7429 | 0.9998 |
| 0.036 | 3.0 | 576 | 0.0009 | 0.7009 | 0.7736 | 0.7354 | 0.9998 |
| 0.036 | 4.0 | 768 | 0.0008 | 0.7345 | 0.7830 | 0.7580 | 0.9998 |
| 0.036 | 5.0 | 960 | 0.0009 | 0.7387 | 0.7736 | 0.7558 | 0.9998 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
|
presencesw/xlmr_large_vinli_4_label_checkpoint-285
|
presencesw
| 2024-03-07T17:52:39Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T17:51:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
andersonbcdefg/distilbert-splade-onnx
|
andersonbcdefg
| 2024-03-07T17:50:21Z | 6 | 1 |
transformers
|
[
"transformers",
"onnx",
"safetensors",
"distilbert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-06T21:13:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mantis-VL/mfuyu_llava_v3_8192_480p
|
Mantis-VL
| 2024-03-07T17:48:49Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"fuyu",
"text-generation",
"generated_from_trainer",
"base_model:Mantis-VL/mfuyu_llava_8192_480p",
"base_model:finetune:Mantis-VL/mfuyu_llava_8192_480p",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T16:43:18Z |
---
license: cc-by-nc-4.0
base_model: MFuyu/mfuyu_llava_8192_480p
tags:
- generated_from_trainer
model-index:
- name: mfuyu_llava_v3_8192_480p
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mfuyu_llava_v3_8192_480p
This model is a fine-tuned version of [MFuyu/mfuyu_llava_8192_480p](https://huggingface.co/MFuyu/mfuyu_llava_8192_480p) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3.0
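The total train batch size above follows from the per-device batch size, device count, and gradient accumulation steps (a quick arithmetic check, not part of the original card):

```python
# total_train_batch_size = per-device batch * devices * accumulation steps
per_device_batch = 1
num_devices = 16
grad_accum_steps = 4
total_train_batch = per_device_batch * num_devices * grad_accum_steps
print(total_train_batch)  # 64
```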
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
SamuelBabua/StoryTellerV1
|
SamuelBabua
| 2024-03-07T17:45:32Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-07T17:07:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
guilhermebastos96/speecht5_finetuned_male_globo_add_token_2
|
guilhermebastos96
| 2024-03-07T17:42:09Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2024-03-07T05:55:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_male_globo_add_token_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_male_globo_add_token_2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3445
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.3989 | 6.48 | 1000 | 0.3666 |
| 0.3899 | 12.97 | 2000 | 0.3541 |
| 0.3787 | 19.45 | 3000 | 0.3500 |
| 0.3756 | 25.93 | 4000 | 0.3494 |
| 0.3739 | 32.41 | 5000 | 0.3463 |
| 0.3728 | 38.9 | 6000 | 0.3448 |
| 0.3686 | 45.38 | 7000 | 0.3447 |
| 0.3687 | 51.86 | 8000 | 0.3446 |
| 0.3682 | 58.35 | 9000 | 0.3465 |
| 0.3683 | 64.83 | 10000 | 0.3445 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Ftfyhh/xttsv2_banana
|
Ftfyhh
| 2024-03-07T17:42:00Z | 0 | 17 | null |
[
"region:us"
] | null | 2024-03-04T12:07:56Z |
# XTTSv2 Banana finetune - Russian informal speech
A conversational XTTSv2 fine-tune for Russian. Based on 9 minutes of voice messages with profanity from 5 different women.
Video comparison with the original: https://www.youtube.com/watch?v=hPS7dtJn00s
## Features
- adds more intonation, emotion, and breathiness, making speech sound more lifelike.
- handles word stress better (profanity, colloquial vocabulary).
- Russian only. In English, short phrases like Yes./No./Well. produced audio hallucinations; on longer phrases they are barely noticeable. Russian works fine.
- trained on female voices, so all male voices will sound slightly feminine.
- the model weighs 5 GB but uses exactly as much VRAM as the original (2.6 GB).
- training on 9 minutes of voice messages took 70 minutes and 10 epochs on a 3060 12 GB; beyond that, quality only degraded (loss). The larger the dataset, the more VRAM and time are required.
- further improving stress placement would require an even larger dataset with problematic words and manual verification of the Whisper-transcribed text.
## Usage
- you need [Coqui TTS](https://github.com/coqui-ai/TTS/tree/dev#installation) or [xtts_api_server](https://github.com/daswer123/xtts-api-server?tab=readme-ov-file#installation) installed
- download all the files, preserving the folder structure (/model_banana/v2.0.2/...)
- for xtts_api_server: from the folder one level above /model_banana, run in cmd: python -m xtts_api_server -d=cuda -mf model_banana
- Guide on fine-tuning xtts for your own voice: https://docs.coqui.ai/en/latest/models/xtts.html#training (requires 16-20 GB VRAM, but shared VRAM also works, just somewhat slower)
My Russian informal voice assistant: https://github.com/Mozer/talk-llama-fast
Telegram: https://t.me/tensorbanana
|
not-lain/DUSt3R_ViTLarge_BaseDecoder_512_dpt_bis
|
not-lain
| 2024-03-07T17:32:40Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T22:18:56Z |
---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using ****:
- Repo: [More Information Needed]
- Docs: [More Information Needed]
|
artaxx194/FemaleWerewolf
|
artaxx194
| 2024-03-07T17:30:06Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"region:us"
] |
text-to-image
| 2024-03-07T17:29:44Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/example1 - Copy.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
---
# Female Werewolf
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/artaxx194/FemaleWerewolf/tree/main) them in the Files & versions tab.
|
farid1088/GQA_BERT_German_legal_SQuAD_17
|
farid1088
| 2024-03-07T17:25:01Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-05T13:41:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_German_legal_SQuAD_17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_German_legal_SQuAD_17
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 17
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 5.4404 |
| No log | 2.0 | 4 | 4.4407 |
| No log | 3.0 | 6 | 3.9783 |
| No log | 4.0 | 8 | 3.6009 |
| No log | 5.0 | 10 | 3.2873 |
| No log | 6.0 | 12 | 3.0050 |
| No log | 7.0 | 14 | 2.7571 |
| No log | 8.0 | 16 | 2.5398 |
| No log | 9.0 | 18 | 2.3554 |
| No log | 10.0 | 20 | 2.2110 |
| No log | 11.0 | 22 | 2.0977 |
| No log | 12.0 | 24 | 2.0078 |
| No log | 13.0 | 26 | 1.9261 |
| No log | 14.0 | 28 | 1.8590 |
| No log | 15.0 | 30 | 1.8072 |
| No log | 16.0 | 32 | 1.7733 |
| No log | 17.0 | 34 | 1.7586 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
farid1088/GQA_BERT_German_legal_SQuAD_part_augmented_2000
|
farid1088
| 2024-03-07T17:23:27Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-07T14:46:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_BERT_German_legal_SQuAD_part_augmented_2000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_BERT_German_legal_SQuAD_part_augmented_2000
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 160
- eval_batch_size: 40
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.1193 |
| No log | 2.0 | 6 | 4.5794 |
| No log | 3.0 | 9 | 3.9562 |
| No log | 4.0 | 12 | 3.6226 |
| No log | 5.0 | 15 | 3.1767 |
| No log | 6.0 | 18 | 2.8026 |
| No log | 7.0 | 21 | 2.5106 |
| No log | 8.0 | 24 | 2.2343 |
| No log | 9.0 | 27 | 2.0290 |
| No log | 10.0 | 30 | 1.8059 |
| No log | 11.0 | 33 | 1.6448 |
| No log | 12.0 | 36 | 1.4814 |
| No log | 13.0 | 39 | 1.3270 |
| No log | 14.0 | 42 | 1.2522 |
| No log | 15.0 | 45 | 1.1957 |
| No log | 16.0 | 48 | 1.1489 |
| No log | 17.0 | 51 | 1.1251 |
| No log | 18.0 | 54 | 1.1000 |
| No log | 19.0 | 57 | 1.0762 |
| No log | 20.0 | 60 | 1.0465 |
| No log | 21.0 | 63 | 1.0398 |
| No log | 22.0 | 66 | 1.0363 |
| No log | 23.0 | 69 | 1.0388 |
| No log | 24.0 | 72 | 1.0330 |
| No log | 25.0 | 75 | 1.0242 |
| No log | 26.0 | 78 | 1.0188 |
| No log | 27.0 | 81 | 1.0227 |
| No log | 28.0 | 84 | 1.0281 |
| No log | 29.0 | 87 | 1.0362 |
| No log | 30.0 | 90 | 1.0278 |
| No log | 31.0 | 93 | 1.0463 |
| No log | 32.0 | 96 | 1.0733 |
| No log | 33.0 | 99 | 1.0895 |
| No log | 34.0 | 102 | 1.0818 |
| No log | 35.0 | 105 | 1.0836 |
| No log | 36.0 | 108 | 1.0664 |
| No log | 37.0 | 111 | 1.0578 |
| No log | 38.0 | 114 | 1.0792 |
| No log | 39.0 | 117 | 1.0465 |
| No log | 40.0 | 120 | 1.0288 |
| No log | 41.0 | 123 | 1.0609 |
| No log | 42.0 | 126 | 1.0676 |
| No log | 43.0 | 129 | 1.0343 |
| No log | 44.0 | 132 | 1.0653 |
| No log | 45.0 | 135 | 1.1017 |
| No log | 46.0 | 138 | 1.0780 |
| No log | 47.0 | 141 | 1.0841 |
| No log | 48.0 | 144 | 1.0921 |
| No log | 49.0 | 147 | 1.0919 |
| No log | 50.0 | 150 | 1.1088 |
| No log | 51.0 | 153 | 1.0983 |
| No log | 52.0 | 156 | 1.0897 |
| No log | 53.0 | 159 | 1.0991 |
| No log | 54.0 | 162 | 1.1124 |
| No log | 55.0 | 165 | 1.0800 |
| No log | 56.0 | 168 | 1.1173 |
| No log | 57.0 | 171 | 1.1244 |
| No log | 58.0 | 174 | 1.1127 |
| No log | 59.0 | 177 | 1.1290 |
| No log | 60.0 | 180 | 1.1127 |
| No log | 61.0 | 183 | 1.1141 |
| No log | 62.0 | 186 | 1.1494 |
| No log | 63.0 | 189 | 1.1185 |
| No log | 64.0 | 192 | 1.1394 |
| No log | 65.0 | 195 | 1.1624 |
| No log | 66.0 | 198 | 1.1620 |
| No log | 67.0 | 201 | 1.1518 |
| No log | 68.0 | 204 | 1.1353 |
| No log | 69.0 | 207 | 1.2165 |
| No log | 70.0 | 210 | 1.1765 |
| No log | 71.0 | 213 | 1.1964 |
| No log | 72.0 | 216 | 1.2078 |
| No log | 73.0 | 219 | 1.1245 |
| No log | 74.0 | 222 | 1.1631 |
| No log | 75.0 | 225 | 1.1314 |
| No log | 76.0 | 228 | 1.0521 |
| No log | 77.0 | 231 | 1.1047 |
| No log | 78.0 | 234 | 1.1412 |
| No log | 79.0 | 237 | 1.1133 |
| No log | 80.0 | 240 | 1.1257 |
| No log | 81.0 | 243 | 1.1375 |
| No log | 82.0 | 246 | 1.0486 |
| No log | 83.0 | 249 | 1.1223 |
| No log | 84.0 | 252 | 1.1664 |
| No log | 85.0 | 255 | 1.0748 |
| No log | 86.0 | 258 | 1.1151 |
| No log | 87.0 | 261 | 1.1358 |
| No log | 88.0 | 264 | 1.0981 |
| No log | 89.0 | 267 | 1.2120 |
| No log | 90.0 | 270 | 1.1805 |
| No log | 91.0 | 273 | 1.1296 |
| No log | 92.0 | 276 | 1.3029 |
| No log | 93.0 | 279 | 1.2570 |
| No log | 94.0 | 282 | 1.1256 |
| No log | 95.0 | 285 | 1.1910 |
| No log | 96.0 | 288 | 1.2814 |
| No log | 97.0 | 291 | 1.1195 |
| No log | 98.0 | 294 | 1.0572 |
| No log | 99.0 | 297 | 1.1948 |
| No log | 100.0 | 300 | 1.1649 |
| No log | 101.0 | 303 | 1.0716 |
| No log | 102.0 | 306 | 1.1648 |
| No log | 103.0 | 309 | 1.1558 |
| No log | 104.0 | 312 | 1.1381 |
| No log | 105.0 | 315 | 1.2201 |
| No log | 106.0 | 318 | 1.2335 |
| No log | 107.0 | 321 | 1.0798 |
| No log | 108.0 | 324 | 1.1202 |
| No log | 109.0 | 327 | 1.2209 |
| No log | 110.0 | 330 | 1.2331 |
| No log | 111.0 | 333 | 1.1878 |
| No log | 112.0 | 336 | 1.2108 |
| No log | 113.0 | 339 | 1.2244 |
| No log | 114.0 | 342 | 1.1712 |
| No log | 115.0 | 345 | 1.1699 |
| No log | 116.0 | 348 | 1.2039 |
| No log | 117.0 | 351 | 1.0968 |
| No log | 118.0 | 354 | 1.1880 |
| No log | 119.0 | 357 | 1.1514 |
| No log | 120.0 | 360 | 1.0878 |
| No log | 121.0 | 363 | 1.1416 |
| No log | 122.0 | 366 | 1.1696 |
| No log | 123.0 | 369 | 1.1387 |
| No log | 124.0 | 372 | 1.1488 |
| No log | 125.0 | 375 | 1.1840 |
| No log | 126.0 | 378 | 1.1501 |
| No log | 127.0 | 381 | 1.1900 |
| No log | 128.0 | 384 | 1.1478 |
| No log | 129.0 | 387 | 1.2309 |
| No log | 130.0 | 390 | 1.3350 |
| No log | 131.0 | 393 | 1.2147 |
| No log | 132.0 | 396 | 1.1993 |
| No log | 133.0 | 399 | 1.2747 |
| No log | 134.0 | 402 | 1.2372 |
| No log | 135.0 | 405 | 1.2479 |
| No log | 136.0 | 408 | 1.2942 |
| No log | 137.0 | 411 | 1.2322 |
| No log | 138.0 | 414 | 1.2148 |
| No log | 139.0 | 417 | 1.2922 |
| No log | 140.0 | 420 | 1.3430 |
| No log | 141.0 | 423 | 1.3824 |
| No log | 142.0 | 426 | 1.2082 |
| No log | 143.0 | 429 | 1.1967 |
| No log | 144.0 | 432 | 1.2483 |
| No log | 145.0 | 435 | 1.1599 |
| No log | 146.0 | 438 | 1.0864 |
| No log | 147.0 | 441 | 1.1238 |
| No log | 148.0 | 444 | 1.2074 |
| No log | 149.0 | 447 | 1.1902 |
| No log | 150.0 | 450 | 1.1397 |
| No log | 151.0 | 453 | 1.1546 |
| No log | 152.0 | 456 | 1.2126 |
| No log | 153.0 | 459 | 1.2443 |
| No log | 154.0 | 462 | 1.2378 |
| No log | 155.0 | 465 | 1.2335 |
| No log | 156.0 | 468 | 1.1798 |
| No log | 157.0 | 471 | 1.1297 |
| No log | 158.0 | 474 | 1.1737 |
| No log | 159.0 | 477 | 1.0970 |
| No log | 160.0 | 480 | 1.1708 |
| No log | 161.0 | 483 | 1.1551 |
| No log | 162.0 | 486 | 1.1848 |
| No log | 163.0 | 489 | 1.1971 |
| No log | 164.0 | 492 | 1.1720 |
| No log | 165.0 | 495 | 1.1960 |
| No log | 166.0 | 498 | 1.2754 |
| 1.0047 | 167.0 | 501 | 1.2083 |
| 1.0047 | 168.0 | 504 | 1.0888 |
| 1.0047 | 169.0 | 507 | 1.2684 |
| 1.0047 | 170.0 | 510 | 1.3395 |
| 1.0047 | 171.0 | 513 | 1.2508 |
| 1.0047 | 172.0 | 516 | 1.1460 |
| 1.0047 | 173.0 | 519 | 1.2464 |
| 1.0047 | 174.0 | 522 | 1.2131 |
| 1.0047 | 175.0 | 525 | 1.1181 |
| 1.0047 | 176.0 | 528 | 1.2012 |
| 1.0047 | 177.0 | 531 | 1.2957 |
| 1.0047 | 178.0 | 534 | 1.1890 |
| 1.0047 | 179.0 | 537 | 1.1628 |
| 1.0047 | 180.0 | 540 | 1.1929 |
| 1.0047 | 181.0 | 543 | 1.2900 |
| 1.0047 | 182.0 | 546 | 1.3240 |
| 1.0047 | 183.0 | 549 | 1.2145 |
| 1.0047 | 184.0 | 552 | 1.2942 |
| 1.0047 | 185.0 | 555 | 1.3425 |
| 1.0047 | 186.0 | 558 | 1.1772 |
| 1.0047 | 187.0 | 561 | 1.2255 |
| 1.0047 | 188.0 | 564 | 1.4528 |
| 1.0047 | 189.0 | 567 | 1.3898 |
| 1.0047 | 190.0 | 570 | 1.1862 |
| 1.0047 | 191.0 | 573 | 1.1700 |
| 1.0047 | 192.0 | 576 | 1.2801 |
| 1.0047 | 193.0 | 579 | 1.2571 |
| 1.0047 | 194.0 | 582 | 1.1962 |
| 1.0047 | 195.0 | 585 | 1.2228 |
| 1.0047 | 196.0 | 588 | 1.2153 |
| 1.0047 | 197.0 | 591 | 1.1498 |
| 1.0047 | 198.0 | 594 | 1.1130 |
| 1.0047 | 199.0 | 597 | 1.1537 |
| 1.0047 | 200.0 | 600 | 1.2239 |
| 1.0047 | 201.0 | 603 | 1.1742 |
| 1.0047 | 202.0 | 606 | 1.1292 |
| 1.0047 | 203.0 | 609 | 1.1688 |
| 1.0047 | 204.0 | 612 | 1.1844 |
| 1.0047 | 205.0 | 615 | 1.1928 |
| 1.0047 | 206.0 | 618 | 1.2253 |
| 1.0047 | 207.0 | 621 | 1.2585 |
| 1.0047 | 208.0 | 624 | 1.3174 |
| 1.0047 | 209.0 | 627 | 1.3660 |
| 1.0047 | 210.0 | 630 | 1.2523 |
| 1.0047 | 211.0 | 633 | 1.2249 |
| 1.0047 | 212.0 | 636 | 1.4178 |
| 1.0047 | 213.0 | 639 | 1.3895 |
| 1.0047 | 214.0 | 642 | 1.2523 |
| 1.0047 | 215.0 | 645 | 1.1921 |
| 1.0047 | 216.0 | 648 | 1.2245 |
| 1.0047 | 217.0 | 651 | 1.3426 |
| 1.0047 | 218.0 | 654 | 1.3673 |
| 1.0047 | 219.0 | 657 | 1.1933 |
| 1.0047 | 220.0 | 660 | 1.1469 |
| 1.0047 | 221.0 | 663 | 1.2684 |
| 1.0047 | 222.0 | 666 | 1.4222 |
| 1.0047 | 223.0 | 669 | 1.4067 |
| 1.0047 | 224.0 | 672 | 1.3425 |
| 1.0047 | 225.0 | 675 | 1.3358 |
| 1.0047 | 226.0 | 678 | 1.4246 |
| 1.0047 | 227.0 | 681 | 1.3301 |
| 1.0047 | 228.0 | 684 | 1.1915 |
| 1.0047 | 229.0 | 687 | 1.2654 |
| 1.0047 | 230.0 | 690 | 1.4043 |
| 1.0047 | 231.0 | 693 | 1.3357 |
| 1.0047 | 232.0 | 696 | 1.2512 |
| 1.0047 | 233.0 | 699 | 1.2383 |
| 1.0047 | 234.0 | 702 | 1.1516 |
| 1.0047 | 235.0 | 705 | 1.1382 |
| 1.0047 | 236.0 | 708 | 1.2749 |
| 1.0047 | 237.0 | 711 | 1.3747 |
| 1.0047 | 238.0 | 714 | 1.1791 |
| 1.0047 | 239.0 | 717 | 1.1527 |
| 1.0047 | 240.0 | 720 | 1.2194 |
| 1.0047 | 241.0 | 723 | 1.2754 |
| 1.0047 | 242.0 | 726 | 1.3448 |
| 1.0047 | 243.0 | 729 | 1.3382 |
| 1.0047 | 244.0 | 732 | 1.2932 |
| 1.0047 | 245.0 | 735 | 1.3135 |
| 1.0047 | 246.0 | 738 | 1.3671 |
| 1.0047 | 247.0 | 741 | 1.3735 |
| 1.0047 | 248.0 | 744 | 1.4142 |
| 1.0047 | 249.0 | 747 | 1.4000 |
| 1.0047 | 250.0 | 750 | 1.2954 |
| 1.0047 | 251.0 | 753 | 1.2629 |
| 1.0047 | 252.0 | 756 | 1.2982 |
| 1.0047 | 253.0 | 759 | 1.2750 |
| 1.0047 | 254.0 | 762 | 1.2273 |
| 1.0047 | 255.0 | 765 | 1.2209 |
| 1.0047 | 256.0 | 768 | 1.2359 |
| 1.0047 | 257.0 | 771 | 1.2626 |
| 1.0047 | 258.0 | 774 | 1.1799 |
| 1.0047 | 259.0 | 777 | 1.1506 |
| 1.0047 | 260.0 | 780 | 1.1846 |
| 1.0047 | 261.0 | 783 | 1.2278 |
| 1.0047 | 262.0 | 786 | 1.2040 |
| 1.0047 | 263.0 | 789 | 1.1920 |
| 1.0047 | 264.0 | 792 | 1.1921 |
| 1.0047 | 265.0 | 795 | 1.2421 |
| 1.0047 | 266.0 | 798 | 1.2557 |
| 1.0047 | 267.0 | 801 | 1.2245 |
| 1.0047 | 268.0 | 804 | 1.2240 |
| 1.0047 | 269.0 | 807 | 1.3193 |
| 1.0047 | 270.0 | 810 | 1.3523 |
| 1.0047 | 271.0 | 813 | 1.3143 |
| 1.0047 | 272.0 | 816 | 1.2657 |
| 1.0047 | 273.0 | 819 | 1.3099 |
| 1.0047 | 274.0 | 822 | 1.2485 |
| 1.0047 | 275.0 | 825 | 1.1617 |
| 1.0047 | 276.0 | 828 | 1.2186 |
| 1.0047 | 277.0 | 831 | 1.2683 |
| 1.0047 | 278.0 | 834 | 1.2432 |
| 1.0047 | 279.0 | 837 | 1.3252 |
| 1.0047 | 280.0 | 840 | 1.4173 |
| 1.0047 | 281.0 | 843 | 1.3807 |
| 1.0047 | 282.0 | 846 | 1.3895 |
| 1.0047 | 283.0 | 849 | 1.3531 |
| 1.0047 | 284.0 | 852 | 1.2847 |
| 1.0047 | 285.0 | 855 | 1.2734 |
| 1.0047 | 286.0 | 858 | 1.2917 |
| 1.0047 | 287.0 | 861 | 1.3048 |
| 1.0047 | 288.0 | 864 | 1.3169 |
| 1.0047 | 289.0 | 867 | 1.3620 |
| 1.0047 | 290.0 | 870 | 1.4486 |
| 1.0047 | 291.0 | 873 | 1.3860 |
| 1.0047 | 292.0 | 876 | 1.3026 |
| 1.0047 | 293.0 | 879 | 1.2993 |
| 1.0047 | 294.0 | 882 | 1.2825 |
| 1.0047 | 295.0 | 885 | 1.2764 |
| 1.0047 | 296.0 | 888 | 1.3134 |
| 1.0047 | 297.0 | 891 | 1.3452 |
| 1.0047 | 298.0 | 894 | 1.3714 |
| 1.0047 | 299.0 | 897 | 1.3125 |
| 1.0047 | 300.0 | 900 | 1.2099 |
| 1.0047 | 301.0 | 903 | 1.2298 |
| 1.0047 | 302.0 | 906 | 1.3122 |
| 1.0047 | 303.0 | 909 | 1.3047 |
| 1.0047 | 304.0 | 912 | 1.2591 |
| 1.0047 | 305.0 | 915 | 1.2820 |
| 1.0047 | 306.0 | 918 | 1.2770 |
| 1.0047 | 307.0 | 921 | 1.2783 |
| 1.0047 | 308.0 | 924 | 1.3475 |
| 1.0047 | 309.0 | 927 | 1.3819 |
| 1.0047 | 310.0 | 930 | 1.2759 |
| 1.0047 | 311.0 | 933 | 1.1658 |
| 1.0047 | 312.0 | 936 | 1.1919 |
| 1.0047 | 313.0 | 939 | 1.3712 |
| 1.0047 | 314.0 | 942 | 1.4586 |
| 1.0047 | 315.0 | 945 | 1.4405 |
| 1.0047 | 316.0 | 948 | 1.2275 |
| 1.0047 | 317.0 | 951 | 1.2043 |
| 1.0047 | 318.0 | 954 | 1.3147 |
| 1.0047 | 319.0 | 957 | 1.4305 |
| 1.0047 | 320.0 | 960 | 1.3858 |
| 1.0047 | 321.0 | 963 | 1.2997 |
| 1.0047 | 322.0 | 966 | 1.2348 |
| 1.0047 | 323.0 | 969 | 1.2264 |
| 1.0047 | 324.0 | 972 | 1.2819 |
| 1.0047 | 325.0 | 975 | 1.3146 |
| 1.0047 | 326.0 | 978 | 1.3341 |
| 1.0047 | 327.0 | 981 | 1.3511 |
| 1.0047 | 328.0 | 984 | 1.3223 |
| 1.0047 | 329.0 | 987 | 1.3236 |
| 1.0047 | 330.0 | 990 | 1.3429 |
| 1.0047 | 331.0 | 993 | 1.2715 |
| 1.0047 | 332.0 | 996 | 1.2452 |
| 1.0047 | 333.0 | 999 | 1.2350 |
| 0.5933 | 334.0 | 1002 | 1.1789 |
| 0.5933 | 335.0 | 1005 | 1.2327 |
| 0.5933 | 336.0 | 1008 | 1.2986 |
| 0.5933 | 337.0 | 1011 | 1.2372 |
| 0.5933 | 338.0 | 1014 | 1.1142 |
| 0.5933 | 339.0 | 1017 | 1.1219 |
| 0.5933 | 340.0 | 1020 | 1.2149 |
| 0.5933 | 341.0 | 1023 | 1.3215 |
| 0.5933 | 342.0 | 1026 | 1.3930 |
| 0.5933 | 343.0 | 1029 | 1.3952 |
| 0.5933 | 344.0 | 1032 | 1.3798 |
| 0.5933 | 345.0 | 1035 | 1.3870 |
| 0.5933 | 346.0 | 1038 | 1.3835 |
| 0.5933 | 347.0 | 1041 | 1.2778 |
| 0.5933 | 348.0 | 1044 | 1.2079 |
| 0.5933 | 349.0 | 1047 | 1.2545 |
| 0.5933 | 350.0 | 1050 | 1.3546 |
| 0.5933 | 351.0 | 1053 | 1.3485 |
| 0.5933 | 352.0 | 1056 | 1.2388 |
| 0.5933 | 353.0 | 1059 | 1.1877 |
| 0.5933 | 354.0 | 1062 | 1.1707 |
| 0.5933 | 355.0 | 1065 | 1.3036 |
| 0.5933 | 356.0 | 1068 | 1.4033 |
| 0.5933 | 357.0 | 1071 | 1.3046 |
| 0.5933 | 358.0 | 1074 | 1.1871 |
| 0.5933 | 359.0 | 1077 | 1.2303 |
| 0.5933 | 360.0 | 1080 | 1.4086 |
| 0.5933 | 361.0 | 1083 | 1.3546 |
| 0.5933 | 362.0 | 1086 | 1.1697 |
| 0.5933 | 363.0 | 1089 | 1.1320 |
| 0.5933 | 364.0 | 1092 | 1.1799 |
| 0.5933 | 365.0 | 1095 | 1.2172 |
| 0.5933 | 366.0 | 1098 | 1.3199 |
| 0.5933 | 367.0 | 1101 | 1.3302 |
| 0.5933 | 368.0 | 1104 | 1.3020 |
| 0.5933 | 369.0 | 1107 | 1.2652 |
| 0.5933 | 370.0 | 1110 | 1.3420 |
| 0.5933 | 371.0 | 1113 | 1.3486 |
| 0.5933 | 372.0 | 1116 | 1.2853 |
| 0.5933 | 373.0 | 1119 | 1.2203 |
| 0.5933 | 374.0 | 1122 | 1.1671 |
| 0.5933 | 375.0 | 1125 | 1.3050 |
| 0.5933 | 376.0 | 1128 | 1.4090 |
| 0.5933 | 377.0 | 1131 | 1.3682 |
| 0.5933 | 378.0 | 1134 | 1.2919 |
| 0.5933 | 379.0 | 1137 | 1.2611 |
| 0.5933 | 380.0 | 1140 | 1.2714 |
| 0.5933 | 381.0 | 1143 | 1.3204 |
| 0.5933 | 382.0 | 1146 | 1.3206 |
| 0.5933 | 383.0 | 1149 | 1.2592 |
| 0.5933 | 384.0 | 1152 | 1.1575 |
| 0.5933 | 385.0 | 1155 | 1.1801 |
| 0.5933 | 386.0 | 1158 | 1.2966 |
| 0.5933 | 387.0 | 1161 | 1.3092 |
| 0.5933 | 388.0 | 1164 | 1.3284 |
| 0.5933 | 389.0 | 1167 | 1.3397 |
| 0.5933 | 390.0 | 1170 | 1.3137 |
| 0.5933 | 391.0 | 1173 | 1.2775 |
| 0.5933 | 392.0 | 1176 | 1.1970 |
| 0.5933 | 393.0 | 1179 | 1.1671 |
| 0.5933 | 394.0 | 1182 | 1.3037 |
| 0.5933 | 395.0 | 1185 | 1.3400 |
| 0.5933 | 396.0 | 1188 | 1.2243 |
| 0.5933 | 397.0 | 1191 | 1.2322 |
| 0.5933 | 398.0 | 1194 | 1.3279 |
| 0.5933 | 399.0 | 1197 | 1.3577 |
| 0.5933 | 400.0 | 1200 | 1.3690 |
| 0.5933 | 401.0 | 1203 | 1.3068 |
| 0.5933 | 402.0 | 1206 | 1.2011 |
| 0.5933 | 403.0 | 1209 | 1.2389 |
| 0.5933 | 404.0 | 1212 | 1.3540 |
| 0.5933 | 405.0 | 1215 | 1.3858 |
| 0.5933 | 406.0 | 1218 | 1.3326 |
| 0.5933 | 407.0 | 1221 | 1.2234 |
| 0.5933 | 408.0 | 1224 | 1.1657 |
| 0.5933 | 409.0 | 1227 | 1.1664 |
| 0.5933 | 410.0 | 1230 | 1.2766 |
| 0.5933 | 411.0 | 1233 | 1.3610 |
| 0.5933 | 412.0 | 1236 | 1.3622 |
| 0.5933 | 413.0 | 1239 | 1.3024 |
| 0.5933 | 414.0 | 1242 | 1.2516 |
| 0.5933 | 415.0 | 1245 | 1.2160 |
| 0.5933 | 416.0 | 1248 | 1.1839 |
| 0.5933 | 417.0 | 1251 | 1.1225 |
| 0.5933 | 418.0 | 1254 | 1.1113 |
| 0.5933 | 419.0 | 1257 | 1.1720 |
| 0.5933 | 420.0 | 1260 | 1.3755 |
| 0.5933 | 421.0 | 1263 | 1.3626 |
| 0.5933 | 422.0 | 1266 | 1.2200 |
| 0.5933 | 423.0 | 1269 | 1.2175 |
| 0.5933 | 424.0 | 1272 | 1.3046 |
| 0.5933 | 425.0 | 1275 | 1.3120 |
| 0.5933 | 426.0 | 1278 | 1.3499 |
| 0.5933 | 427.0 | 1281 | 1.3850 |
| 0.5933 | 428.0 | 1284 | 1.3673 |
| 0.5933 | 429.0 | 1287 | 1.3124 |
| 0.5933 | 430.0 | 1290 | 1.2314 |
| 0.5933 | 431.0 | 1293 | 1.1724 |
| 0.5933 | 432.0 | 1296 | 1.2057 |
| 0.5933 | 433.0 | 1299 | 1.3040 |
| 0.5933 | 434.0 | 1302 | 1.3551 |
| 0.5933 | 435.0 | 1305 | 1.3777 |
| 0.5933 | 436.0 | 1308 | 1.3375 |
| 0.5933 | 437.0 | 1311 | 1.2963 |
| 0.5933 | 438.0 | 1314 | 1.3388 |
| 0.5933 | 439.0 | 1317 | 1.3685 |
| 0.5933 | 440.0 | 1320 | 1.3634 |
| 0.5933 | 441.0 | 1323 | 1.3484 |
| 0.5933 | 442.0 | 1326 | 1.3536 |
| 0.5933 | 443.0 | 1329 | 1.3584 |
| 0.5933 | 444.0 | 1332 | 1.3452 |
| 0.5933 | 445.0 | 1335 | 1.3379 |
| 0.5933 | 446.0 | 1338 | 1.3434 |
| 0.5933 | 447.0 | 1341 | 1.3378 |
| 0.5933 | 448.0 | 1344 | 1.3451 |
| 0.5933 | 449.0 | 1347 | 1.3583 |
| 0.5933 | 450.0 | 1350 | 1.3498 |
| 0.5933 | 451.0 | 1353 | 1.3202 |
| 0.5933 | 452.0 | 1356 | 1.3219 |
| 0.5933 | 453.0 | 1359 | 1.3534 |
| 0.5933 | 454.0 | 1362 | 1.3738 |
| 0.5933 | 455.0 | 1365 | 1.3947 |
| 0.5933 | 456.0 | 1368 | 1.3863 |
| 0.5933 | 457.0 | 1371 | 1.3747 |
| 0.5933 | 458.0 | 1374 | 1.3685 |
| 0.5933 | 459.0 | 1377 | 1.3519 |
| 0.5933 | 460.0 | 1380 | 1.3706 |
| 0.5933 | 461.0 | 1383 | 1.3956 |
| 0.5933 | 462.0 | 1386 | 1.3628 |
| 0.5933 | 463.0 | 1389 | 1.3669 |
| 0.5933 | 464.0 | 1392 | 1.3338 |
| 0.5933 | 465.0 | 1395 | 1.3316 |
| 0.5933 | 466.0 | 1398 | 1.3641 |
| 0.5933 | 467.0 | 1401 | 1.3980 |
| 0.5933 | 468.0 | 1404 | 1.4046 |
| 0.5933 | 469.0 | 1407 | 1.3757 |
| 0.5933 | 470.0 | 1410 | 1.3437 |
| 0.5933 | 471.0 | 1413 | 1.3552 |
| 0.5933 | 472.0 | 1416 | 1.3930 |
| 0.5933 | 473.0 | 1419 | 1.3926 |
| 0.5933 | 474.0 | 1422 | 1.3316 |
| 0.5933 | 475.0 | 1425 | 1.2435 |
| 0.5933 | 476.0 | 1428 | 1.2005 |
| 0.5933 | 477.0 | 1431 | 1.2154 |
| 0.5933 | 478.0 | 1434 | 1.2495 |
| 0.5933 | 479.0 | 1437 | 1.2615 |
| 0.5933 | 480.0 | 1440 | 1.2665 |
| 0.5933 | 481.0 | 1443 | 1.2593 |
| 0.5933 | 482.0 | 1446 | 1.2442 |
| 0.5933 | 483.0 | 1449 | 1.2603 |
| 0.5933 | 484.0 | 1452 | 1.2821 |
| 0.5933 | 485.0 | 1455 | 1.2940 |
| 0.5933 | 486.0 | 1458 | 1.2904 |
| 0.5933 | 487.0 | 1461 | 1.2815 |
| 0.5933 | 488.0 | 1464 | 1.2719 |
| 0.5933 | 489.0 | 1467 | 1.2950 |
| 0.5933 | 490.0 | 1470 | 1.3589 |
| 0.5933 | 491.0 | 1473 | 1.4231 |
| 0.5933 | 492.0 | 1476 | 1.4325 |
| 0.5933 | 493.0 | 1479 | 1.3372 |
| 0.5933 | 494.0 | 1482 | 1.2722 |
| 0.5933 | 495.0 | 1485 | 1.3250 |
| 0.5933 | 496.0 | 1488 | 1.4279 |
| 0.5933 | 497.0 | 1491 | 1.4185 |
| 0.5933 | 498.0 | 1494 | 1.3254 |
| 0.5933 | 499.0 | 1497 | 1.2996 |
| 0.5698 | 500.0 | 1500 | 1.2436 |
| 0.5698 | 501.0 | 1503 | 1.2112 |
| 0.5698 | 502.0 | 1506 | 1.2390 |
| 0.5698 | 503.0 | 1509 | 1.2883 |
| 0.5698 | 504.0 | 1512 | 1.3407 |
| 0.5698 | 505.0 | 1515 | 1.3793 |
| 0.5698 | 506.0 | 1518 | 1.4309 |
| 0.5698 | 507.0 | 1521 | 1.4088 |
| 0.5698 | 508.0 | 1524 | 1.3966 |
| 0.5698 | 509.0 | 1527 | 1.4082 |
| 0.5698 | 510.0 | 1530 | 1.3814 |
| 0.5698 | 511.0 | 1533 | 1.3396 |
| 0.5698 | 512.0 | 1536 | 1.3387 |
| 0.5698 | 513.0 | 1539 | 1.3057 |
| 0.5698 | 514.0 | 1542 | 1.2687 |
| 0.5698 | 515.0 | 1545 | 1.2707 |
| 0.5698 | 516.0 | 1548 | 1.4157 |
| 0.5698 | 517.0 | 1551 | 1.4618 |
| 0.5698 | 518.0 | 1554 | 1.4597 |
| 0.5698 | 519.0 | 1557 | 1.4605 |
| 0.5698 | 520.0 | 1560 | 1.4481 |
| 0.5698 | 521.0 | 1563 | 1.4423 |
| 0.5698 | 522.0 | 1566 | 1.4312 |
| 0.5698 | 523.0 | 1569 | 1.4020 |
| 0.5698 | 524.0 | 1572 | 1.3645 |
| 0.5698 | 525.0 | 1575 | 1.3438 |
| 0.5698 | 526.0 | 1578 | 1.3205 |
| 0.5698 | 527.0 | 1581 | 1.3053 |
| 0.5698 | 528.0 | 1584 | 1.2944 |
| 0.5698 | 529.0 | 1587 | 1.3649 |
| 0.5698 | 530.0 | 1590 | 1.4252 |
| 0.5698 | 531.0 | 1593 | 1.4653 |
| 0.5698 | 532.0 | 1596 | 1.4664 |
| 0.5698 | 533.0 | 1599 | 1.4386 |
| 0.5698 | 534.0 | 1602 | 1.3703 |
| 0.5698 | 535.0 | 1605 | 1.3156 |
| 0.5698 | 536.0 | 1608 | 1.3263 |
| 0.5698 | 537.0 | 1611 | 1.3055 |
| 0.5698 | 538.0 | 1614 | 1.3066 |
| 0.5698 | 539.0 | 1617 | 1.3549 |
| 0.5698 | 540.0 | 1620 | 1.4445 |
| 0.5698 | 541.0 | 1623 | 1.4701 |
| 0.5698 | 542.0 | 1626 | 1.4265 |
| 0.5698 | 543.0 | 1629 | 1.3599 |
| 0.5698 | 544.0 | 1632 | 1.3451 |
| 0.5698 | 545.0 | 1635 | 1.3428 |
| 0.5698 | 546.0 | 1638 | 1.3231 |
| 0.5698 | 547.0 | 1641 | 1.3266 |
| 0.5698 | 548.0 | 1644 | 1.3216 |
| 0.5698 | 549.0 | 1647 | 1.2599 |
| 0.5698 | 550.0 | 1650 | 1.2338 |
| 0.5698 | 551.0 | 1653 | 1.2140 |
| 0.5698 | 552.0 | 1656 | 1.2297 |
| 0.5698 | 553.0 | 1659 | 1.2842 |
| 0.5698 | 554.0 | 1662 | 1.3357 |
| 0.5698 | 555.0 | 1665 | 1.3797 |
| 0.5698 | 556.0 | 1668 | 1.3690 |
| 0.5698 | 557.0 | 1671 | 1.3163 |
| 0.5698 | 558.0 | 1674 | 1.2510 |
| 0.5698 | 559.0 | 1677 | 1.2714 |
| 0.5698 | 560.0 | 1680 | 1.3403 |
| 0.5698 | 561.0 | 1683 | 1.4387 |
| 0.5698 | 562.0 | 1686 | 1.4697 |
| 0.5698 | 563.0 | 1689 | 1.4641 |
| 0.5698 | 564.0 | 1692 | 1.4123 |
| 0.5698 | 565.0 | 1695 | 1.3808 |
| 0.5698 | 566.0 | 1698 | 1.3325 |
| 0.5698 | 567.0 | 1701 | 1.3470 |
| 0.5698 | 568.0 | 1704 | 1.3301 |
| 0.5698 | 569.0 | 1707 | 1.3255 |
| 0.5698 | 570.0 | 1710 | 1.3614 |
| 0.5698 | 571.0 | 1713 | 1.4034 |
| 0.5698 | 572.0 | 1716 | 1.4201 |
| 0.5698 | 573.0 | 1719 | 1.4221 |
| 0.5698 | 574.0 | 1722 | 1.4100 |
| 0.5698 | 575.0 | 1725 | 1.3791 |
| 0.5698 | 576.0 | 1728 | 1.3478 |
| 0.5698 | 577.0 | 1731 | 1.3398 |
| 0.5698 | 578.0 | 1734 | 1.3408 |
| 0.5698 | 579.0 | 1737 | 1.3577 |
| 0.5698 | 580.0 | 1740 | 1.3780 |
| 0.5698 | 581.0 | 1743 | 1.3871 |
| 0.5698 | 582.0 | 1746 | 1.3754 |
| 0.5698 | 583.0 | 1749 | 1.3487 |
| 0.5698 | 584.0 | 1752 | 1.3299 |
| 0.5698 | 585.0 | 1755 | 1.3215 |
| 0.5698 | 586.0 | 1758 | 1.3004 |
| 0.5698 | 587.0 | 1761 | 1.2819 |
| 0.5698 | 588.0 | 1764 | 1.2804 |
| 0.5698 | 589.0 | 1767 | 1.2724 |
| 0.5698 | 590.0 | 1770 | 1.2975 |
| 0.5698 | 591.0 | 1773 | 1.3615 |
| 0.5698 | 592.0 | 1776 | 1.4006 |
| 0.5698 | 593.0 | 1779 | 1.4037 |
| 0.5698 | 594.0 | 1782 | 1.3882 |
| 0.5698 | 595.0 | 1785 | 1.3919 |
| 0.5698 | 596.0 | 1788 | 1.3759 |
| 0.5698 | 597.0 | 1791 | 1.3215 |
| 0.5698 | 598.0 | 1794 | 1.3130 |
| 0.5698 | 599.0 | 1797 | 1.3547 |
| 0.5698 | 600.0 | 1800 | 1.3832 |
| 0.5698 | 601.0 | 1803 | 1.3755 |
| 0.5698 | 602.0 | 1806 | 1.3555 |
| 0.5698 | 603.0 | 1809 | 1.3085 |
| 0.5698 | 604.0 | 1812 | 1.3235 |
| 0.5698 | 605.0 | 1815 | 1.3616 |
| 0.5698 | 606.0 | 1818 | 1.4128 |
| 0.5698 | 607.0 | 1821 | 1.4333 |
| 0.5698 | 608.0 | 1824 | 1.4124 |
| 0.5698 | 609.0 | 1827 | 1.3622 |
| 0.5698 | 610.0 | 1830 | 1.2583 |
| 0.5698 | 611.0 | 1833 | 1.2334 |
| 0.5698 | 612.0 | 1836 | 1.2316 |
| 0.5698 | 613.0 | 1839 | 1.2430 |
| 0.5698 | 614.0 | 1842 | 1.2659 |
| 0.5698 | 615.0 | 1845 | 1.2801 |
| 0.5698 | 616.0 | 1848 | 1.3092 |
| 0.5698 | 617.0 | 1851 | 1.3340 |
| 0.5698 | 618.0 | 1854 | 1.3543 |
| 0.5698 | 619.0 | 1857 | 1.3771 |
| 0.5698 | 620.0 | 1860 | 1.3764 |
| 0.5698 | 621.0 | 1863 | 1.3577 |
| 0.5698 | 622.0 | 1866 | 1.3255 |
| 0.5698 | 623.0 | 1869 | 1.2972 |
| 0.5698 | 624.0 | 1872 | 1.2877 |
| 0.5698 | 625.0 | 1875 | 1.3092 |
| 0.5698 | 626.0 | 1878 | 1.3348 |
| 0.5698 | 627.0 | 1881 | 1.3486 |
| 0.5698 | 628.0 | 1884 | 1.3543 |
| 0.5698 | 629.0 | 1887 | 1.3504 |
| 0.5698 | 630.0 | 1890 | 1.3544 |
| 0.5698 | 631.0 | 1893 | 1.3419 |
| 0.5698 | 632.0 | 1896 | 1.3093 |
| 0.5698 | 633.0 | 1899 | 1.2775 |
| 0.5698 | 634.0 | 1902 | 1.2783 |
| 0.5698 | 635.0 | 1905 | 1.2753 |
| 0.5698 | 636.0 | 1908 | 1.2506 |
| 0.5698 | 637.0 | 1911 | 1.2332 |
| 0.5698 | 638.0 | 1914 | 1.2763 |
| 0.5698 | 639.0 | 1917 | 1.3084 |
| 0.5698 | 640.0 | 1920 | 1.3237 |
| 0.5698 | 641.0 | 1923 | 1.3340 |
| 0.5698 | 642.0 | 1926 | 1.3339 |
| 0.5698 | 643.0 | 1929 | 1.3103 |
| 0.5698 | 644.0 | 1932 | 1.2959 |
| 0.5698 | 645.0 | 1935 | 1.2915 |
| 0.5698 | 646.0 | 1938 | 1.3321 |
| 0.5698 | 647.0 | 1941 | 1.3656 |
| 0.5698 | 648.0 | 1944 | 1.3728 |
| 0.5698 | 649.0 | 1947 | 1.3629 |
| 0.5698 | 650.0 | 1950 | 1.3502 |
| 0.5698 | 651.0 | 1953 | 1.3297 |
| 0.5698 | 652.0 | 1956 | 1.3057 |
| 0.5698 | 653.0 | 1959 | 1.3008 |
| 0.5698 | 654.0 | 1962 | 1.2932 |
| 0.5698 | 655.0 | 1965 | 1.2945 |
| 0.5698 | 656.0 | 1968 | 1.2929 |
| 0.5698 | 657.0 | 1971 | 1.3073 |
| 0.5698 | 658.0 | 1974 | 1.3311 |
| 0.5698 | 659.0 | 1977 | 1.3472 |
| 0.5698 | 660.0 | 1980 | 1.3409 |
| 0.5698 | 661.0 | 1983 | 1.3315 |
| 0.5698 | 662.0 | 1986 | 1.3154 |
| 0.5698 | 663.0 | 1989 | 1.3030 |
| 0.5698 | 664.0 | 1992 | 1.3006 |
| 0.5698 | 665.0 | 1995 | 1.2968 |
| 0.5698 | 666.0 | 1998 | 1.3045 |
| 0.5609 | 667.0 | 2001 | 1.3166 |
| 0.5609 | 668.0 | 2004 | 1.3430 |
| 0.5609 | 669.0 | 2007 | 1.3718 |
| 0.5609 | 670.0 | 2010 | 1.3945 |
| 0.5609 | 671.0 | 2013 | 1.3919 |
| 0.5609 | 672.0 | 2016 | 1.3895 |
| 0.5609 | 673.0 | 2019 | 1.3659 |
| 0.5609 | 674.0 | 2022 | 1.3276 |
| 0.5609 | 675.0 | 2025 | 1.3060 |
| 0.5609 | 676.0 | 2028 | 1.2941 |
| 0.5609 | 677.0 | 2031 | 1.2893 |
| 0.5609 | 678.0 | 2034 | 1.2937 |
| 0.5609 | 679.0 | 2037 | 1.3019 |
| 0.5609 | 680.0 | 2040 | 1.3119 |
| 0.5609 | 681.0 | 2043 | 1.3222 |
| 0.5609 | 682.0 | 2046 | 1.3238 |
| 0.5609 | 683.0 | 2049 | 1.3280 |
| 0.5609 | 684.0 | 2052 | 1.3324 |
| 0.5609 | 685.0 | 2055 | 1.3401 |
| 0.5609 | 686.0 | 2058 | 1.3452 |
| 0.5609 | 687.0 | 2061 | 1.3752 |
| 0.5609 | 688.0 | 2064 | 1.3987 |
| 0.5609 | 689.0 | 2067 | 1.4118 |
| 0.5609 | 690.0 | 2070 | 1.4179 |
| 0.5609 | 691.0 | 2073 | 1.4122 |
| 0.5609 | 692.0 | 2076 | 1.3909 |
| 0.5609 | 693.0 | 2079 | 1.3439 |
| 0.5609 | 694.0 | 2082 | 1.3072 |
| 0.5609 | 695.0 | 2085 | 1.2981 |
| 0.5609 | 696.0 | 2088 | 1.3195 |
| 0.5609 | 697.0 | 2091 | 1.3502 |
| 0.5609 | 698.0 | 2094 | 1.3783 |
| 0.5609 | 699.0 | 2097 | 1.3925 |
| 0.5609 | 700.0 | 2100 | 1.4000 |
| 0.5609 | 701.0 | 2103 | 1.3797 |
| 0.5609 | 702.0 | 2106 | 1.3620 |
| 0.5609 | 703.0 | 2109 | 1.3533 |
| 0.5609 | 704.0 | 2112 | 1.3492 |
| 0.5609 | 705.0 | 2115 | 1.3400 |
| 0.5609 | 706.0 | 2118 | 1.3346 |
| 0.5609 | 707.0 | 2121 | 1.3254 |
| 0.5609 | 708.0 | 2124 | 1.3290 |
| 0.5609 | 709.0 | 2127 | 1.3406 |
| 0.5609 | 710.0 | 2130 | 1.3619 |
| 0.5609 | 711.0 | 2133 | 1.3898 |
| 0.5609 | 712.0 | 2136 | 1.3945 |
| 0.5609 | 713.0 | 2139 | 1.3817 |
| 0.5609 | 714.0 | 2142 | 1.3686 |
| 0.5609 | 715.0 | 2145 | 1.3627 |
| 0.5609 | 716.0 | 2148 | 1.3617 |
| 0.5609 | 717.0 | 2151 | 1.3548 |
| 0.5609 | 718.0 | 2154 | 1.3464 |
| 0.5609 | 719.0 | 2157 | 1.3368 |
| 0.5609 | 720.0 | 2160 | 1.3138 |
| 0.5609 | 721.0 | 2163 | 1.3073 |
| 0.5609 | 722.0 | 2166 | 1.3203 |
| 0.5609 | 723.0 | 2169 | 1.3342 |
| 0.5609 | 724.0 | 2172 | 1.3562 |
| 0.5609 | 725.0 | 2175 | 1.3725 |
| 0.5609 | 726.0 | 2178 | 1.3748 |
| 0.5609 | 727.0 | 2181 | 1.3711 |
| 0.5609 | 728.0 | 2184 | 1.3717 |
| 0.5609 | 729.0 | 2187 | 1.3627 |
| 0.5609 | 730.0 | 2190 | 1.3515 |
| 0.5609 | 731.0 | 2193 | 1.3373 |
| 0.5609 | 732.0 | 2196 | 1.3160 |
| 0.5609 | 733.0 | 2199 | 1.3125 |
| 0.5609 | 734.0 | 2202 | 1.3301 |
| 0.5609 | 735.0 | 2205 | 1.3197 |
| 0.5609 | 736.0 | 2208 | 1.3125 |
| 0.5609 | 737.0 | 2211 | 1.3072 |
| 0.5609 | 738.0 | 2214 | 1.2798 |
| 0.5609 | 739.0 | 2217 | 1.2672 |
| 0.5609 | 740.0 | 2220 | 1.2533 |
| 0.5609 | 741.0 | 2223 | 1.2383 |
| 0.5609 | 742.0 | 2226 | 1.2450 |
| 0.5609 | 743.0 | 2229 | 1.2557 |
| 0.5609 | 744.0 | 2232 | 1.2751 |
| 0.5609 | 745.0 | 2235 | 1.3235 |
| 0.5609 | 746.0 | 2238 | 1.3708 |
| 0.5609 | 747.0 | 2241 | 1.3867 |
| 0.5609 | 748.0 | 2244 | 1.3686 |
| 0.5609 | 749.0 | 2247 | 1.3309 |
| 0.5609 | 750.0 | 2250 | 1.2811 |
| 0.5609 | 751.0 | 2253 | 1.2294 |
| 0.5609 | 752.0 | 2256 | 1.1340 |
| 0.5609 | 753.0 | 2259 | 1.1346 |
| 0.5609 | 754.0 | 2262 | 1.2078 |
| 0.5609 | 755.0 | 2265 | 1.2462 |
| 0.5609 | 756.0 | 2268 | 1.2557 |
| 0.5609 | 757.0 | 2271 | 1.2358 |
| 0.5609 | 758.0 | 2274 | 1.2225 |
| 0.5609 | 759.0 | 2277 | 1.2298 |
| 0.5609 | 760.0 | 2280 | 1.2561 |
| 0.5609 | 761.0 | 2283 | 1.2861 |
| 0.5609 | 762.0 | 2286 | 1.3017 |
| 0.5609 | 763.0 | 2289 | 1.3228 |
| 0.5609 | 764.0 | 2292 | 1.3235 |
| 0.5609 | 765.0 | 2295 | 1.3232 |
| 0.5609 | 766.0 | 2298 | 1.3236 |
| 0.5609 | 767.0 | 2301 | 1.3289 |
| 0.5609 | 768.0 | 2304 | 1.3324 |
| 0.5609 | 769.0 | 2307 | 1.3325 |
| 0.5609 | 770.0 | 2310 | 1.3282 |
| 0.5609 | 771.0 | 2313 | 1.3176 |
| 0.5609 | 772.0 | 2316 | 1.2927 |
| 0.5609 | 773.0 | 2319 | 1.2773 |
| 0.5609 | 774.0 | 2322 | 1.2617 |
| 0.5609 | 775.0 | 2325 | 1.2578 |
| 0.5609 | 776.0 | 2328 | 1.2454 |
| 0.5609 | 777.0 | 2331 | 1.2212 |
| 0.5609 | 778.0 | 2334 | 1.2459 |
| 0.5609 | 779.0 | 2337 | 1.3040 |
| 0.5609 | 780.0 | 2340 | 1.3453 |
| 0.5609 | 781.0 | 2343 | 1.3773 |
| 0.5609 | 782.0 | 2346 | 1.3942 |
| 0.5609 | 783.0 | 2349 | 1.3854 |
| 0.5609 | 784.0 | 2352 | 1.3637 |
| 0.5609 | 785.0 | 2355 | 1.3213 |
| 0.5609 | 786.0 | 2358 | 1.2795 |
| 0.5609 | 787.0 | 2361 | 1.2844 |
| 0.5609 | 788.0 | 2364 | 1.3058 |
| 0.5609 | 789.0 | 2367 | 1.3198 |
| 0.5609 | 790.0 | 2370 | 1.3251 |
| 0.5609 | 791.0 | 2373 | 1.3193 |
| 0.5609 | 792.0 | 2376 | 1.3021 |
| 0.5609 | 793.0 | 2379 | 1.3105 |
| 0.5609 | 794.0 | 2382 | 1.3310 |
| 0.5609 | 795.0 | 2385 | 1.3574 |
| 0.5609 | 796.0 | 2388 | 1.3642 |
| 0.5609 | 797.0 | 2391 | 1.3580 |
| 0.5609 | 798.0 | 2394 | 1.3255 |
| 0.5609 | 799.0 | 2397 | 1.2785 |
| 0.5609 | 800.0 | 2400 | 1.2199 |
| 0.5609 | 801.0 | 2403 | 1.1221 |
| 0.5609 | 802.0 | 2406 | 1.1233 |
| 0.5609 | 803.0 | 2409 | 1.1873 |
| 0.5609 | 804.0 | 2412 | 1.3435 |
| 0.5609 | 805.0 | 2415 | 1.3522 |
| 0.5609 | 806.0 | 2418 | 1.3800 |
| 0.5609 | 807.0 | 2421 | 1.3976 |
| 0.5609 | 808.0 | 2424 | 1.3899 |
| 0.5609 | 809.0 | 2427 | 1.3480 |
| 0.5609 | 810.0 | 2430 | 1.1934 |
| 0.5609 | 811.0 | 2433 | 1.1259 |
| 0.5609 | 812.0 | 2436 | 1.1836 |
| 0.5609 | 813.0 | 2439 | 1.2207 |
| 0.5609 | 814.0 | 2442 | 1.3393 |
| 0.5609 | 815.0 | 2445 | 1.4465 |
| 0.5609 | 816.0 | 2448 | 1.4166 |
| 0.5609 | 817.0 | 2451 | 1.3814 |
| 0.5609 | 818.0 | 2454 | 1.3636 |
| 0.5609 | 819.0 | 2457 | 1.3334 |
| 0.5609 | 820.0 | 2460 | 1.2854 |
| 0.5609 | 821.0 | 2463 | 1.2674 |
| 0.5609 | 822.0 | 2466 | 1.2533 |
| 0.5609 | 823.0 | 2469 | 1.2967 |
| 0.5609 | 824.0 | 2472 | 1.3504 |
| 0.5609 | 825.0 | 2475 | 1.3052 |
| 0.5609 | 826.0 | 2478 | 1.2894 |
| 0.5609 | 827.0 | 2481 | 1.3342 |
| 0.5609 | 828.0 | 2484 | 1.4139 |
| 0.5609 | 829.0 | 2487 | 1.4048 |
| 0.5609 | 830.0 | 2490 | 1.3678 |
| 0.5609 | 831.0 | 2493 | 1.3604 |
| 0.5609 | 832.0 | 2496 | 1.3533 |
| 0.5609 | 833.0 | 2499 | 1.3609 |
| 0.5608 | 834.0 | 2502 | 1.3909 |
| 0.5608 | 835.0 | 2505 | 1.4105 |
| 0.5608 | 836.0 | 2508 | 1.4294 |
| 0.5608 | 837.0 | 2511 | 1.4313 |
| 0.5608 | 838.0 | 2514 | 1.4112 |
| 0.5608 | 839.0 | 2517 | 1.3844 |
| 0.5608 | 840.0 | 2520 | 1.3769 |
| 0.5608 | 841.0 | 2523 | 1.3679 |
| 0.5608 | 842.0 | 2526 | 1.3449 |
| 0.5608 | 843.0 | 2529 | 1.3389 |
| 0.5608 | 844.0 | 2532 | 1.3366 |
| 0.5608 | 845.0 | 2535 | 1.3453 |
| 0.5608 | 846.0 | 2538 | 1.3726 |
| 0.5608 | 847.0 | 2541 | 1.3670 |
| 0.5608 | 848.0 | 2544 | 1.3503 |
| 0.5608 | 849.0 | 2547 | 1.3262 |
| 0.5608 | 850.0 | 2550 | 1.3017 |
| 0.5608 | 851.0 | 2553 | 1.2902 |
| 0.5608 | 852.0 | 2556 | 1.2662 |
| 0.5608 | 853.0 | 2559 | 1.2408 |
| 0.5608 | 854.0 | 2562 | 1.2208 |
| 0.5608 | 855.0 | 2565 | 1.2003 |
| 0.5608 | 856.0 | 2568 | 1.2038 |
| 0.5608 | 857.0 | 2571 | 1.2344 |
| 0.5608 | 858.0 | 2574 | 1.2968 |
| 0.5608 | 859.0 | 2577 | 1.3401 |
| 0.5608 | 860.0 | 2580 | 1.3674 |
| 0.5608 | 861.0 | 2583 | 1.3837 |
| 0.5608 | 862.0 | 2586 | 1.3753 |
| 0.5608 | 863.0 | 2589 | 1.3121 |
| 0.5608 | 864.0 | 2592 | 1.2480 |
| 0.5608 | 865.0 | 2595 | 1.2293 |
| 0.5608 | 866.0 | 2598 | 1.2000 |
| 0.5608 | 867.0 | 2601 | 1.2027 |
| 0.5608 | 868.0 | 2604 | 1.2281 |
| 0.5608 | 869.0 | 2607 | 1.2710 |
| 0.5608 | 870.0 | 2610 | 1.3535 |
| 0.5608 | 871.0 | 2613 | 1.3937 |
| 0.5608 | 872.0 | 2616 | 1.4003 |
| 0.5608 | 873.0 | 2619 | 1.3758 |
| 0.5608 | 874.0 | 2622 | 1.3253 |
| 0.5608 | 875.0 | 2625 | 1.2449 |
| 0.5608 | 876.0 | 2628 | 1.1745 |
| 0.5608 | 877.0 | 2631 | 1.1366 |
| 0.5608 | 878.0 | 2634 | 1.1655 |
| 0.5608 | 879.0 | 2637 | 1.2965 |
| 0.5608 | 880.0 | 2640 | 1.3166 |
| 0.5608 | 881.0 | 2643 | 1.3225 |
| 0.5608 | 882.0 | 2646 | 1.3141 |
| 0.5608 | 883.0 | 2649 | 1.2992 |
| 0.5608 | 884.0 | 2652 | 1.2834 |
| 0.5608 | 885.0 | 2655 | 1.2698 |
| 0.5608 | 886.0 | 2658 | 1.2829 |
| 0.5608 | 887.0 | 2661 | 1.3100 |
| 0.5608 | 888.0 | 2664 | 1.3314 |
| 0.5608 | 889.0 | 2667 | 1.3393 |
| 0.5608 | 890.0 | 2670 | 1.3354 |
| 0.5608 | 891.0 | 2673 | 1.3278 |
| 0.5608 | 892.0 | 2676 | 1.3333 |
| 0.5608 | 893.0 | 2679 | 1.3443 |
| 0.5608 | 894.0 | 2682 | 1.3343 |
| 0.5608 | 895.0 | 2685 | 1.3148 |
| 0.5608 | 896.0 | 2688 | 1.2858 |
| 0.5608 | 897.0 | 2691 | 1.2698 |
| 0.5608 | 898.0 | 2694 | 1.2777 |
| 0.5608 | 899.0 | 2697 | 1.2901 |
| 0.5608 | 900.0 | 2700 | 1.3008 |
| 0.5608 | 901.0 | 2703 | 1.3260 |
| 0.5608 | 902.0 | 2706 | 1.3440 |
| 0.5608 | 903.0 | 2709 | 1.3438 |
| 0.5608 | 904.0 | 2712 | 1.3380 |
| 0.5608 | 905.0 | 2715 | 1.3237 |
| 0.5608 | 906.0 | 2718 | 1.3145 |
| 0.5608 | 907.0 | 2721 | 1.3022 |
| 0.5608 | 908.0 | 2724 | 1.2902 |
| 0.5608 | 909.0 | 2727 | 1.2793 |
| 0.5608 | 910.0 | 2730 | 1.2909 |
| 0.5608 | 911.0 | 2733 | 1.3084 |
| 0.5608 | 912.0 | 2736 | 1.3185 |
| 0.5608 | 913.0 | 2739 | 1.3250 |
| 0.5608 | 914.0 | 2742 | 1.3412 |
| 0.5608 | 915.0 | 2745 | 1.3491 |
| 0.5608 | 916.0 | 2748 | 1.3561 |
| 0.5608 | 917.0 | 2751 | 1.3675 |
| 0.5608 | 918.0 | 2754 | 1.3759 |
| 0.5608 | 919.0 | 2757 | 1.3829 |
| 0.5608 | 920.0 | 2760 | 1.3805 |
| 0.5608 | 921.0 | 2763 | 1.3669 |
| 0.5608 | 922.0 | 2766 | 1.3605 |
| 0.5608 | 923.0 | 2769 | 1.3455 |
| 0.5608 | 924.0 | 2772 | 1.3373 |
| 0.5608 | 925.0 | 2775 | 1.3440 |
| 0.5608 | 926.0 | 2778 | 1.3408 |
| 0.5608 | 927.0 | 2781 | 1.3424 |
| 0.5608 | 928.0 | 2784 | 1.3414 |
| 0.5608 | 929.0 | 2787 | 1.3383 |
| 0.5608 | 930.0 | 2790 | 1.3371 |
| 0.5608 | 931.0 | 2793 | 1.3406 |
| 0.5608 | 932.0 | 2796 | 1.3432 |
| 0.5608 | 933.0 | 2799 | 1.3564 |
| 0.5608 | 934.0 | 2802 | 1.3773 |
| 0.5608 | 935.0 | 2805 | 1.3931 |
| 0.5608 | 936.0 | 2808 | 1.4030 |
| 0.5608 | 937.0 | 2811 | 1.3998 |
| 0.5608 | 938.0 | 2814 | 1.3955 |
| 0.5608 | 939.0 | 2817 | 1.3937 |
| 0.5608 | 940.0 | 2820 | 1.3801 |
| 0.5608 | 941.0 | 2823 | 1.3729 |
| 0.5608 | 942.0 | 2826 | 1.3679 |
| 0.5608 | 943.0 | 2829 | 1.3550 |
| 0.5608 | 944.0 | 2832 | 1.3437 |
| 0.5608 | 945.0 | 2835 | 1.3347 |
| 0.5608 | 946.0 | 2838 | 1.3220 |
| 0.5608 | 947.0 | 2841 | 1.2968 |
| 0.5608 | 948.0 | 2844 | 1.2799 |
| 0.5608 | 949.0 | 2847 | 1.2549 |
| 0.5608 | 950.0 | 2850 | 1.2459 |
| 0.5608 | 951.0 | 2853 | 1.2461 |
| 0.5608 | 952.0 | 2856 | 1.2299 |
| 0.5608 | 953.0 | 2859 | 1.2177 |
| 0.5608 | 954.0 | 2862 | 1.2640 |
| 0.5608 | 955.0 | 2865 | 1.2997 |
| 0.5608 | 956.0 | 2868 | 1.2971 |
| 0.5608 | 957.0 | 2871 | 1.2788 |
| 0.5608 | 958.0 | 2874 | 1.2858 |
| 0.5608 | 959.0 | 2877 | 1.2694 |
| 0.5608 | 960.0 | 2880 | 1.2542 |
| 0.5608 | 961.0 | 2883 | 1.2733 |
| 0.5608 | 962.0 | 2886 | 1.3086 |
| 0.5608 | 963.0 | 2889 | 1.3123 |
| 0.5608 | 964.0 | 2892 | 1.3039 |
| 0.5608 | 965.0 | 2895 | 1.2834 |
| 0.5608 | 966.0 | 2898 | 1.2809 |
| 0.5608 | 967.0 | 2901 | 1.2696 |
| 0.5608 | 968.0 | 2904 | 1.2567 |
| 0.5608 | 969.0 | 2907 | 1.2497 |
| 0.5608 | 970.0 | 2910 | 1.2639 |
| 0.5608 | 971.0 | 2913 | 1.2809 |
| 0.5608 | 972.0 | 2916 | 1.2881 |
| 0.5608 | 973.0 | 2919 | 1.3082 |
| 0.5608 | 974.0 | 2922 | 1.3283 |
| 0.5608 | 975.0 | 2925 | 1.3331 |
| 0.5608 | 976.0 | 2928 | 1.3384 |
| 0.5608 | 977.0 | 2931 | 1.3405 |
| 0.5608 | 978.0 | 2934 | 1.3515 |
| 0.5608 | 979.0 | 2937 | 1.3734 |
| 0.5608 | 980.0 | 2940 | 1.3875 |
| 0.5608 | 981.0 | 2943 | 1.3766 |
| 0.5608 | 982.0 | 2946 | 1.3530 |
| 0.5608 | 983.0 | 2949 | 1.3309 |
| 0.5608 | 984.0 | 2952 | 1.3178 |
| 0.5608 | 985.0 | 2955 | 1.2963 |
| 0.5608 | 986.0 | 2958 | 1.2672 |
| 0.5608 | 987.0 | 2961 | 1.2697 |
| 0.5608 | 988.0 | 2964 | 1.2620 |
| 0.5608 | 989.0 | 2967 | 1.2438 |
| 0.5608 | 990.0 | 2970 | 1.2488 |
| 0.5608 | 991.0 | 2973 | 1.2630 |
| 0.5608 | 992.0 | 2976 | 1.2496 |
| 0.5608 | 993.0 | 2979 | 1.2646 |
| 0.5608 | 994.0 | 2982 | 1.3051 |
| 0.5608 | 995.0 | 2985 | 1.3445 |
| 0.5608 | 996.0 | 2988 | 1.3551 |
| 0.5608 | 997.0 | 2991 | 1.3600 |
| 0.5608 | 998.0 | 2994 | 1.3566 |
| 0.5608 | 999.0 | 2997 | 1.3485 |
| 0.5596 | 1000.0 | 3000 | 1.3403 |
| 0.5596 | 1001.0 | 3003 | 1.3328 |
| 0.5596 | 1002.0 | 3006 | 1.3367 |
| 0.5596 | 1003.0 | 3009 | 1.3306 |
| 0.5596 | 1004.0 | 3012 | 1.3026 |
| 0.5596 | 1005.0 | 3015 | 1.2606 |
| 0.5596 | 1006.0 | 3018 | 1.2459 |
| 0.5596 | 1007.0 | 3021 | 1.2332 |
| 0.5596 | 1008.0 | 3024 | 1.2062 |
| 0.5596 | 1009.0 | 3027 | 1.1985 |
| 0.5596 | 1010.0 | 3030 | 1.1937 |
| 0.5596 | 1011.0 | 3033 | 1.1920 |
| 0.5596 | 1012.0 | 3036 | 1.1953 |
| 0.5596 | 1013.0 | 3039 | 1.1919 |
| 0.5596 | 1014.0 | 3042 | 1.1809 |
| 0.5596 | 1015.0 | 3045 | 1.1649 |
| 0.5596 | 1016.0 | 3048 | 1.1612 |
| 0.5596 | 1017.0 | 3051 | 1.1667 |
| 0.5596 | 1018.0 | 3054 | 1.1732 |
| 0.5596 | 1019.0 | 3057 | 1.1847 |
| 0.5596 | 1020.0 | 3060 | 1.1990 |
| 0.5596 | 1021.0 | 3063 | 1.2160 |
| 0.5596 | 1022.0 | 3066 | 1.2672 |
| 0.5596 | 1023.0 | 3069 | 1.3042 |
| 0.5596 | 1024.0 | 3072 | 1.3417 |
| 0.5596 | 1025.0 | 3075 | 1.3652 |
| 0.5596 | 1026.0 | 3078 | 1.3665 |
| 0.5596 | 1027.0 | 3081 | 1.3571 |
| 0.5596 | 1028.0 | 3084 | 1.3403 |
| 0.5596 | 1029.0 | 3087 | 1.3310 |
| 0.5596 | 1030.0 | 3090 | 1.3274 |
| 0.5596 | 1031.0 | 3093 | 1.3228 |
| 0.5596 | 1032.0 | 3096 | 1.2960 |
| 0.5596 | 1033.0 | 3099 | 1.2831 |
| 0.5596 | 1034.0 | 3102 | 1.2817 |
| 0.5596 | 1035.0 | 3105 | 1.2808 |
| 0.5596 | 1036.0 | 3108 | 1.2747 |
| 0.5596 | 1037.0 | 3111 | 1.2732 |
| 0.5596 | 1038.0 | 3114 | 1.2738 |
| 0.5596 | 1039.0 | 3117 | 1.2797 |
| 0.5596 | 1040.0 | 3120 | 1.2912 |
| 0.5596 | 1041.0 | 3123 | 1.3257 |
| 0.5596 | 1042.0 | 3126 | 1.3495 |
| 0.5596 | 1043.0 | 3129 | 1.3620 |
| 0.5596 | 1044.0 | 3132 | 1.3673 |
| 0.5596 | 1045.0 | 3135 | 1.3723 |
| 0.5596 | 1046.0 | 3138 | 1.3709 |
| 0.5596 | 1047.0 | 3141 | 1.3701 |
| 0.5596 | 1048.0 | 3144 | 1.3690 |
| 0.5596 | 1049.0 | 3147 | 1.3811 |
| 0.5596 | 1050.0 | 3150 | 1.3936 |
| 0.5596 | 1051.0 | 3153 | 1.3898 |
| 0.5596 | 1052.0 | 3156 | 1.3976 |
| 0.5596 | 1053.0 | 3159 | 1.3920 |
| 0.5596 | 1054.0 | 3162 | 1.3665 |
| 0.5596 | 1055.0 | 3165 | 1.3330 |
| 0.5596 | 1056.0 | 3168 | 1.3195 |
| 0.5596 | 1057.0 | 3171 | 1.3350 |
| 0.5596 | 1058.0 | 3174 | 1.3444 |
| 0.5596 | 1059.0 | 3177 | 1.3567 |
| 0.5596 | 1060.0 | 3180 | 1.3821 |
| 0.5596 | 1061.0 | 3183 | 1.3965 |
| 0.5596 | 1062.0 | 3186 | 1.4039 |
| 0.5596 | 1063.0 | 3189 | 1.4126 |
| 0.5596 | 1064.0 | 3192 | 1.4127 |
| 0.5596 | 1065.0 | 3195 | 1.4188 |
| 0.5596 | 1066.0 | 3198 | 1.4220 |
| 0.5596 | 1067.0 | 3201 | 1.4240 |
| 0.5596 | 1068.0 | 3204 | 1.4197 |
| 0.5596 | 1069.0 | 3207 | 1.4138 |
| 0.5596 | 1070.0 | 3210 | 1.4155 |
| 0.5596 | 1071.0 | 3213 | 1.4155 |
| 0.5596 | 1072.0 | 3216 | 1.4227 |
| 0.5596 | 1073.0 | 3219 | 1.4209 |
| 0.5596 | 1074.0 | 3222 | 1.4186 |
| 0.5596 | 1075.0 | 3225 | 1.4118 |
| 0.5596 | 1076.0 | 3228 | 1.3992 |
| 0.5596 | 1077.0 | 3231 | 1.3924 |
| 0.5596 | 1078.0 | 3234 | 1.3884 |
| 0.5596 | 1079.0 | 3237 | 1.3913 |
| 0.5596 | 1080.0 | 3240 | 1.3882 |
| 0.5596 | 1081.0 | 3243 | 1.3765 |
| 0.5596 | 1082.0 | 3246 | 1.3725 |
| 0.5596 | 1083.0 | 3249 | 1.3893 |
| 0.5596 | 1084.0 | 3252 | 1.3933 |
| 0.5596 | 1085.0 | 3255 | 1.4005 |
| 0.5596 | 1086.0 | 3258 | 1.4017 |
| 0.5596 | 1087.0 | 3261 | 1.4086 |
| 0.5596 | 1088.0 | 3264 | 1.4195 |
| 0.5596 | 1089.0 | 3267 | 1.4274 |
| 0.5596 | 1090.0 | 3270 | 1.4258 |
| 0.5596 | 1091.0 | 3273 | 1.4179 |
| 0.5596 | 1092.0 | 3276 | 1.4090 |
| 0.5596 | 1093.0 | 3279 | 1.3901 |
| 0.5596 | 1094.0 | 3282 | 1.3714 |
| 0.5596 | 1095.0 | 3285 | 1.3512 |
| 0.5596 | 1096.0 | 3288 | 1.3355 |
| 0.5596 | 1097.0 | 3291 | 1.3368 |
| 0.5596 | 1098.0 | 3294 | 1.3421 |
| 0.5596 | 1099.0 | 3297 | 1.3195 |
| 0.5596 | 1100.0 | 3300 | 1.2919 |
| 0.5596 | 1101.0 | 3303 | 1.2551 |
| 0.5596 | 1102.0 | 3306 | 1.2370 |
| 0.5596 | 1103.0 | 3309 | 1.2445 |
| 0.5596 | 1104.0 | 3312 | 1.2213 |
| 0.5596 | 1105.0 | 3315 | 1.2361 |
| 0.5596 | 1106.0 | 3318 | 1.3104 |
| 0.5596 | 1107.0 | 3321 | 1.3632 |
| 0.5596 | 1108.0 | 3324 | 1.3822 |
| 0.5596 | 1109.0 | 3327 | 1.3887 |
| 0.5596 | 1110.0 | 3330 | 1.3920 |
| 0.5596 | 1111.0 | 3333 | 1.3876 |
| 0.5596 | 1112.0 | 3336 | 1.3874 |
| 0.5596 | 1113.0 | 3339 | 1.3850 |
| 0.5596 | 1114.0 | 3342 | 1.3685 |
| 0.5596 | 1115.0 | 3345 | 1.3439 |
| 0.5596 | 1116.0 | 3348 | 1.3327 |
| 0.5596 | 1117.0 | 3351 | 1.3158 |
| 0.5596 | 1118.0 | 3354 | 1.3046 |
| 0.5596 | 1119.0 | 3357 | 1.2996 |
| 0.5596 | 1120.0 | 3360 | 1.2958 |
| 0.5596 | 1121.0 | 3363 | 1.2871 |
| 0.5596 | 1122.0 | 3366 | 1.2576 |
| 0.5596 | 1123.0 | 3369 | 1.2534 |
| 0.5596 | 1124.0 | 3372 | 1.2344 |
| 0.5596 | 1125.0 | 3375 | 1.2290 |
| 0.5596 | 1126.0 | 3378 | 1.2363 |
| 0.5596 | 1127.0 | 3381 | 1.2271 |
| 0.5596 | 1128.0 | 3384 | 1.2219 |
| 0.5596 | 1129.0 | 3387 | 1.2365 |
| 0.5596 | 1130.0 | 3390 | 1.2537 |
| 0.5596 | 1131.0 | 3393 | 1.2754 |
| 0.5596 | 1132.0 | 3396 | 1.2962 |
| 0.5596 | 1133.0 | 3399 | 1.3161 |
| 0.5596 | 1134.0 | 3402 | 1.3244 |
| 0.5596 | 1135.0 | 3405 | 1.3309 |
| 0.5596 | 1136.0 | 3408 | 1.3317 |
| 0.5596 | 1137.0 | 3411 | 1.3369 |
| 0.5596 | 1138.0 | 3414 | 1.3336 |
| 0.5596 | 1139.0 | 3417 | 1.3099 |
| 0.5596 | 1140.0 | 3420 | 1.2747 |
| 0.5596 | 1141.0 | 3423 | 1.2515 |
| 0.5596 | 1142.0 | 3426 | 1.2653 |
| 0.5596 | 1143.0 | 3429 | 1.2975 |
| 0.5596 | 1144.0 | 3432 | 1.3184 |
| 0.5596 | 1145.0 | 3435 | 1.3373 |
| 0.5596 | 1146.0 | 3438 | 1.3265 |
| 0.5596 | 1147.0 | 3441 | 1.3195 |
| 0.5596 | 1148.0 | 3444 | 1.3177 |
| 0.5596 | 1149.0 | 3447 | 1.3045 |
| 0.5596 | 1150.0 | 3450 | 1.3045 |
| 0.5596 | 1151.0 | 3453 | 1.3020 |
| 0.5596 | 1152.0 | 3456 | 1.3021 |
| 0.5596 | 1153.0 | 3459 | 1.3238 |
| 0.5596 | 1154.0 | 3462 | 1.3351 |
| 0.5596 | 1155.0 | 3465 | 1.3334 |
| 0.5596 | 1156.0 | 3468 | 1.3274 |
| 0.5596 | 1157.0 | 3471 | 1.3276 |
| 0.5596 | 1158.0 | 3474 | 1.3119 |
| 0.5596 | 1159.0 | 3477 | 1.2913 |
| 0.5596 | 1160.0 | 3480 | 1.2919 |
| 0.5596 | 1161.0 | 3483 | 1.2927 |
| 0.5596 | 1162.0 | 3486 | 1.3079 |
| 0.5596 | 1163.0 | 3489 | 1.3195 |
| 0.5596 | 1164.0 | 3492 | 1.3286 |
| 0.5596 | 1165.0 | 3495 | 1.3375 |
| 0.5596 | 1166.0 | 3498 | 1.3493 |
| 0.5594 | 1167.0 | 3501 | 1.3599 |
| 0.5594 | 1168.0 | 3504 | 1.3644 |
| 0.5594 | 1169.0 | 3507 | 1.3595 |
| 0.5594 | 1170.0 | 3510 | 1.3476 |
| 0.5594 | 1171.0 | 3513 | 1.3464 |
| 0.5594 | 1172.0 | 3516 | 1.3592 |
| 0.5594 | 1173.0 | 3519 | 1.3673 |
| 0.5594 | 1174.0 | 3522 | 1.3682 |
| 0.5594 | 1175.0 | 3525 | 1.3569 |
| 0.5594 | 1176.0 | 3528 | 1.3434 |
| 0.5594 | 1177.0 | 3531 | 1.3439 |
| 0.5594 | 1178.0 | 3534 | 1.3386 |
| 0.5594 | 1179.0 | 3537 | 1.3180 |
| 0.5594 | 1180.0 | 3540 | 1.2994 |
| 0.5594 | 1181.0 | 3543 | 1.2888 |
| 0.5594 | 1182.0 | 3546 | 1.2911 |
| 0.5594 | 1183.0 | 3549 | 1.2966 |
| 0.5594 | 1184.0 | 3552 | 1.2888 |
| 0.5594 | 1185.0 | 3555 | 1.2784 |
| 0.5594 | 1186.0 | 3558 | 1.2811 |
| 0.5594 | 1187.0 | 3561 | 1.2813 |
| 0.5594 | 1188.0 | 3564 | 1.2797 |
| 0.5594 | 1189.0 | 3567 | 1.2683 |
| 0.5594 | 1190.0 | 3570 | 1.2736 |
| 0.5594 | 1191.0 | 3573 | 1.2614 |
| 0.5594 | 1192.0 | 3576 | 1.2485 |
| 0.5594 | 1193.0 | 3579 | 1.2446 |
| 0.5594 | 1194.0 | 3582 | 1.2077 |
| 0.5594 | 1195.0 | 3585 | 1.1880 |
| 0.5594 | 1196.0 | 3588 | 1.1797 |
| 0.5594 | 1197.0 | 3591 | 1.1750 |
| 0.5594 | 1198.0 | 3594 | 1.1964 |
| 0.5594 | 1199.0 | 3597 | 1.2570 |
| 0.5594 | 1200.0 | 3600 | 1.3173 |
| 0.5594 | 1201.0 | 3603 | 1.3393 |
| 0.5594 | 1202.0 | 3606 | 1.3465 |
| 0.5594 | 1203.0 | 3609 | 1.3254 |
| 0.5594 | 1204.0 | 3612 | 1.3003 |
| 0.5594 | 1205.0 | 3615 | 1.2560 |
| 0.5594 | 1206.0 | 3618 | 1.2008 |
| 0.5594 | 1207.0 | 3621 | 1.1804 |
| 0.5594 | 1208.0 | 3624 | 1.1725 |
| 0.5594 | 1209.0 | 3627 | 1.1634 |
| 0.5594 | 1210.0 | 3630 | 1.1744 |
| 0.5594 | 1211.0 | 3633 | 1.1912 |
| 0.5594 | 1212.0 | 3636 | 1.2141 |
| 0.5594 | 1213.0 | 3639 | 1.2444 |
| 0.5594 | 1214.0 | 3642 | 1.2703 |
| 0.5594 | 1215.0 | 3645 | 1.2812 |
| 0.5594 | 1216.0 | 3648 | 1.2849 |
| 0.5594 | 1217.0 | 3651 | 1.2871 |
| 0.5594 | 1218.0 | 3654 | 1.2800 |
| 0.5594 | 1219.0 | 3657 | 1.2755 |
| 0.5594 | 1220.0 | 3660 | 1.2668 |
| 0.5594 | 1221.0 | 3663 | 1.2512 |
| 0.5594 | 1222.0 | 3666 | 1.2390 |
| 0.5594 | 1223.0 | 3669 | 1.2268 |
| 0.5594 | 1224.0 | 3672 | 1.2071 |
| 0.5594 | 1225.0 | 3675 | 1.1804 |
| 0.5594 | 1226.0 | 3678 | 1.1572 |
| 0.5594 | 1227.0 | 3681 | 1.1618 |
| 0.5594 | 1228.0 | 3684 | 1.1741 |
| 0.5594 | 1229.0 | 3687 | 1.1867 |
| 0.5594 | 1230.0 | 3690 | 1.1978 |
| 0.5594 | 1231.0 | 3693 | 1.2180 |
| 0.5594 | 1232.0 | 3696 | 1.2379 |
| 0.5594 | 1233.0 | 3699 | 1.2486 |
| 0.5594 | 1234.0 | 3702 | 1.2526 |
| 0.5594 | 1235.0 | 3705 | 1.2632 |
| 0.5594 | 1236.0 | 3708 | 1.2866 |
| 0.5594 | 1237.0 | 3711 | 1.2903 |
| 0.5594 | 1238.0 | 3714 | 1.2655 |
| 0.5594 | 1239.0 | 3717 | 1.2452 |
| 0.5594 | 1240.0 | 3720 | 1.2348 |
| 0.5594 | 1241.0 | 3723 | 1.1997 |
| 0.5594 | 1242.0 | 3726 | 1.1615 |
| 0.5594 | 1243.0 | 3729 | 1.1294 |
| 0.5594 | 1244.0 | 3732 | 1.1171 |
| 0.5594 | 1245.0 | 3735 | 1.1613 |
| 0.5594 | 1246.0 | 3738 | 1.2428 |
| 0.5594 | 1247.0 | 3741 | 1.2627 |
| 0.5594 | 1248.0 | 3744 | 1.2525 |
| 0.5594 | 1249.0 | 3747 | 1.2029 |
| 0.5594 | 1250.0 | 3750 | 1.1155 |
| 0.5594 | 1251.0 | 3753 | 1.0784 |
| 0.5594 | 1252.0 | 3756 | 1.0683 |
| 0.5594 | 1253.0 | 3759 | 1.0901 |
| 0.5594 | 1254.0 | 3762 | 1.1788 |
| 0.5594 | 1255.0 | 3765 | 1.2079 |
| 0.5594 | 1256.0 | 3768 | 1.2129 |
| 0.5594 | 1257.0 | 3771 | 1.2088 |
| 0.5594 | 1258.0 | 3774 | 1.1948 |
| 0.5594 | 1259.0 | 3777 | 1.1811 |
| 0.5594 | 1260.0 | 3780 | 1.1757 |
| 0.5594 | 1261.0 | 3783 | 1.1764 |
| 0.5594 | 1262.0 | 3786 | 1.1673 |
| 0.5594 | 1263.0 | 3789 | 1.1421 |
| 0.5594 | 1264.0 | 3792 | 1.1351 |
| 0.5594 | 1265.0 | 3795 | 1.1570 |
| 0.5594 | 1266.0 | 3798 | 1.1854 |
| 0.5594 | 1267.0 | 3801 | 1.1974 |
| 0.5594 | 1268.0 | 3804 | 1.2039 |
| 0.5594 | 1269.0 | 3807 | 1.1966 |
| 0.5594 | 1270.0 | 3810 | 1.2079 |
| 0.5594 | 1271.0 | 3813 | 1.2104 |
| 0.5594 | 1272.0 | 3816 | 1.2171 |
| 0.5594 | 1273.0 | 3819 | 1.2335 |
| 0.5594 | 1274.0 | 3822 | 1.2483 |
| 0.5594 | 1275.0 | 3825 | 1.2607 |
| 0.5594 | 1276.0 | 3828 | 1.2586 |
| 0.5594 | 1277.0 | 3831 | 1.2527 |
| 0.5594 | 1278.0 | 3834 | 1.2457 |
| 0.5594 | 1279.0 | 3837 | 1.2451 |
| 0.5594 | 1280.0 | 3840 | 1.2669 |
| 0.5594 | 1281.0 | 3843 | 1.2651 |
| 0.5594 | 1282.0 | 3846 | 1.2585 |
| 0.5594 | 1283.0 | 3849 | 1.2459 |
| 0.5594 | 1284.0 | 3852 | 1.2272 |
| 0.5594 | 1285.0 | 3855 | 1.2195 |
| 0.5594 | 1286.0 | 3858 | 1.2154 |
| 0.5594 | 1287.0 | 3861 | 1.2234 |
| 0.5594 | 1288.0 | 3864 | 1.2386 |
| 0.5594 | 1289.0 | 3867 | 1.2574 |
| 0.5594 | 1290.0 | 3870 | 1.2844 |
| 0.5594 | 1291.0 | 3873 | 1.3160 |
| 0.5594 | 1292.0 | 3876 | 1.3283 |
| 0.5594 | 1293.0 | 3879 | 1.3256 |
| 0.5594 | 1294.0 | 3882 | 1.3101 |
| 0.5594 | 1295.0 | 3885 | 1.2981 |
| 0.5594 | 1296.0 | 3888 | 1.2863 |
| 0.5594 | 1297.0 | 3891 | 1.2822 |
| 0.5594 | 1298.0 | 3894 | 1.2751 |
| 0.5594 | 1299.0 | 3897 | 1.2609 |
| 0.5594 | 1300.0 | 3900 | 1.2539 |
| 0.5594 | 1301.0 | 3903 | 1.2455 |
| 0.5594 | 1302.0 | 3906 | 1.2458 |
| 0.5594 | 1303.0 | 3909 | 1.2390 |
| 0.5594 | 1304.0 | 3912 | 1.2530 |
| 0.5594 | 1305.0 | 3915 | 1.2605 |
| 0.5594 | 1306.0 | 3918 | 1.2669 |
| 0.5594 | 1307.0 | 3921 | 1.2699 |
| 0.5594 | 1308.0 | 3924 | 1.2581 |
| 0.5594 | 1309.0 | 3927 | 1.2481 |
| 0.5594 | 1310.0 | 3930 | 1.2469 |
| 0.5594 | 1311.0 | 3933 | 1.2540 |
| 0.5594 | 1312.0 | 3936 | 1.2708 |
| 0.5594 | 1313.0 | 3939 | 1.2828 |
| 0.5594 | 1314.0 | 3942 | 1.2897 |
| 0.5594 | 1315.0 | 3945 | 1.2939 |
| 0.5594 | 1316.0 | 3948 | 1.2995 |
| 0.5594 | 1317.0 | 3951 | 1.3066 |
| 0.5594 | 1318.0 | 3954 | 1.3168 |
| 0.5594 | 1319.0 | 3957 | 1.3175 |
| 0.5594 | 1320.0 | 3960 | 1.3122 |
| 0.5594 | 1321.0 | 3963 | 1.3059 |
| 0.5594 | 1322.0 | 3966 | 1.2981 |
| 0.5594 | 1323.0 | 3969 | 1.2889 |
| 0.5594 | 1324.0 | 3972 | 1.2831 |
| 0.5594 | 1325.0 | 3975 | 1.2885 |
| 0.5594 | 1326.0 | 3978 | 1.2866 |
| 0.5594 | 1327.0 | 3981 | 1.2813 |
| 0.5594 | 1328.0 | 3984 | 1.2779 |
| 0.5594 | 1329.0 | 3987 | 1.2776 |
| 0.5594 | 1330.0 | 3990 | 1.2799 |
| 0.5594 | 1331.0 | 3993 | 1.2826 |
| 0.5594 | 1332.0 | 3996 | 1.2839 |
| 0.5594 | 1333.0 | 3999 | 1.2864 |
| 0.5596 | 1334.0 | 4002 | 1.2831 |
| 0.5596 | 1335.0 | 4005 | 1.2768 |
| 0.5596 | 1336.0 | 4008 | 1.2694 |
| 0.5596 | 1337.0 | 4011 | 1.2594 |
| 0.5596 | 1338.0 | 4014 | 1.2453 |
| 0.5596 | 1339.0 | 4017 | 1.2447 |
| 0.5596 | 1340.0 | 4020 | 1.2359 |
| 0.5596 | 1341.0 | 4023 | 1.2253 |
| 0.5596 | 1342.0 | 4026 | 1.2114 |
| 0.5596 | 1343.0 | 4029 | 1.2037 |
| 0.5596 | 1344.0 | 4032 | 1.1957 |
| 0.5596 | 1345.0 | 4035 | 1.2045 |
| 0.5596 | 1346.0 | 4038 | 1.2123 |
| 0.5596 | 1347.0 | 4041 | 1.2362 |
| 0.5596 | 1348.0 | 4044 | 1.2613 |
| 0.5596 | 1349.0 | 4047 | 1.2745 |
| 0.5596 | 1350.0 | 4050 | 1.2848 |
| 0.5596 | 1351.0 | 4053 | 1.2939 |
| 0.5596 | 1352.0 | 4056 | 1.2986 |
| 0.5596 | 1353.0 | 4059 | 1.2994 |
| 0.5596 | 1354.0 | 4062 | 1.3032 |
| 0.5596 | 1355.0 | 4065 | 1.3034 |
| 0.5596 | 1356.0 | 4068 | 1.3160 |
| 0.5596 | 1357.0 | 4071 | 1.3207 |
| 0.5596 | 1358.0 | 4074 | 1.3250 |
| 0.5596 | 1359.0 | 4077 | 1.3295 |
| 0.5596 | 1360.0 | 4080 | 1.3291 |
| 0.5596 | 1361.0 | 4083 | 1.3191 |
| 0.5596 | 1362.0 | 4086 | 1.3077 |
| 0.5596 | 1363.0 | 4089 | 1.3023 |
| 0.5596 | 1364.0 | 4092 | 1.2966 |
| 0.5596 | 1365.0 | 4095 | 1.2871 |
| 0.5596 | 1366.0 | 4098 | 1.2758 |
| 0.5596 | 1367.0 | 4101 | 1.2703 |
| 0.5596 | 1368.0 | 4104 | 1.2790 |
| 0.5596 | 1369.0 | 4107 | 1.2936 |
| 0.5596 | 1370.0 | 4110 | 1.3103 |
| 0.5596 | 1371.0 | 4113 | 1.3330 |
| 0.5596 | 1372.0 | 4116 | 1.3600 |
| 0.5596 | 1373.0 | 4119 | 1.3767 |
| 0.5596 | 1374.0 | 4122 | 1.3858 |
| 0.5596 | 1375.0 | 4125 | 1.3881 |
| 0.5596 | 1376.0 | 4128 | 1.4005 |
| 0.5596 | 1377.0 | 4131 | 1.4086 |
| 0.5596 | 1378.0 | 4134 | 1.4082 |
| 0.5596 | 1379.0 | 4137 | 1.4018 |
| 0.5596 | 1380.0 | 4140 | 1.3900 |
| 0.5596 | 1381.0 | 4143 | 1.3746 |
| 0.5596 | 1382.0 | 4146 | 1.3608 |
| 0.5596 | 1383.0 | 4149 | 1.3483 |
| 0.5596 | 1384.0 | 4152 | 1.3343 |
| 0.5596 | 1385.0 | 4155 | 1.3260 |
| 0.5596 | 1386.0 | 4158 | 1.3144 |
| 0.5596 | 1387.0 | 4161 | 1.3131 |
| 0.5596 | 1388.0 | 4164 | 1.3051 |
| 0.5596 | 1389.0 | 4167 | 1.2853 |
| 0.5596 | 1390.0 | 4170 | 1.2701 |
| 0.5596 | 1391.0 | 4173 | 1.2635 |
| 0.5596 | 1392.0 | 4176 | 1.2494 |
| 0.5596 | 1393.0 | 4179 | 1.2337 |
| 0.5596 | 1394.0 | 4182 | 1.2267 |
| 0.5596 | 1395.0 | 4185 | 1.2422 |
| 0.5596 | 1396.0 | 4188 | 1.2575 |
| 0.5596 | 1397.0 | 4191 | 1.2733 |
| 0.5596 | 1398.0 | 4194 | 1.2838 |
| 0.5596 | 1399.0 | 4197 | 1.2898 |
| 0.5596 | 1400.0 | 4200 | 1.2937 |
| 0.5596 | 1401.0 | 4203 | 1.2934 |
| 0.5596 | 1402.0 | 4206 | 1.2967 |
| 0.5596 | 1403.0 | 4209 | 1.2893 |
| 0.5596 | 1404.0 | 4212 | 1.2796 |
| 0.5596 | 1405.0 | 4215 | 1.2877 |
| 0.5596 | 1406.0 | 4218 | 1.3098 |
| 0.5596 | 1407.0 | 4221 | 1.3252 |
| 0.5596 | 1408.0 | 4224 | 1.3205 |
| 0.5596 | 1409.0 | 4227 | 1.3168 |
| 0.5596 | 1410.0 | 4230 | 1.3169 |
| 0.5596 | 1411.0 | 4233 | 1.3142 |
| 0.5596 | 1412.0 | 4236 | 1.2923 |
| 0.5596 | 1413.0 | 4239 | 1.2575 |
| 0.5596 | 1414.0 | 4242 | 1.2282 |
| 0.5596 | 1415.0 | 4245 | 1.2126 |
| 0.5596 | 1416.0 | 4248 | 1.2228 |
| 0.5596 | 1417.0 | 4251 | 1.2357 |
| 0.5596 | 1418.0 | 4254 | 1.2567 |
| 0.5596 | 1419.0 | 4257 | 1.2732 |
| 0.5596 | 1420.0 | 4260 | 1.2618 |
| 0.5596 | 1421.0 | 4263 | 1.2471 |
| 0.5596 | 1422.0 | 4266 | 1.2476 |
| 0.5596 | 1423.0 | 4269 | 1.2638 |
| 0.5596 | 1424.0 | 4272 | 1.3039 |
| 0.5596 | 1425.0 | 4275 | 1.3291 |
| 0.5596 | 1426.0 | 4278 | 1.3451 |
| 0.5596 | 1427.0 | 4281 | 1.3500 |
| 0.5596 | 1428.0 | 4284 | 1.3546 |
| 0.5596 | 1429.0 | 4287 | 1.3582 |
| 0.5596 | 1430.0 | 4290 | 1.3553 |
| 0.5596 | 1431.0 | 4293 | 1.3562 |
| 0.5596 | 1432.0 | 4296 | 1.3554 |
| 0.5596 | 1433.0 | 4299 | 1.3519 |
| 0.5596 | 1434.0 | 4302 | 1.3437 |
| 0.5596 | 1435.0 | 4305 | 1.3434 |
| 0.5596 | 1436.0 | 4308 | 1.3346 |
| 0.5596 | 1437.0 | 4311 | 1.3225 |
| 0.5596 | 1438.0 | 4314 | 1.3157 |
| 0.5596 | 1439.0 | 4317 | 1.3004 |
| 0.5596 | 1440.0 | 4320 | 1.2806 |
| 0.5596 | 1441.0 | 4323 | 1.2519 |
| 0.5596 | 1442.0 | 4326 | 1.2243 |
| 0.5596 | 1443.0 | 4329 | 1.2038 |
| 0.5596 | 1444.0 | 4332 | 1.1953 |
| 0.5596 | 1445.0 | 4335 | 1.1985 |
| 0.5596 | 1446.0 | 4338 | 1.2112 |
| 0.5596 | 1447.0 | 4341 | 1.2292 |
| 0.5596 | 1448.0 | 4344 | 1.2461 |
| 0.5596 | 1449.0 | 4347 | 1.2468 |
| 0.5596 | 1450.0 | 4350 | 1.2530 |
| 0.5596 | 1451.0 | 4353 | 1.2572 |
| 0.5596 | 1452.0 | 4356 | 1.2665 |
| 0.5596 | 1453.0 | 4359 | 1.2700 |
| 0.5596 | 1454.0 | 4362 | 1.2696 |
| 0.5596 | 1455.0 | 4365 | 1.2611 |
| 0.5596 | 1456.0 | 4368 | 1.2537 |
| 0.5596 | 1457.0 | 4371 | 1.2517 |
| 0.5596 | 1458.0 | 4374 | 1.2511 |
| 0.5596 | 1459.0 | 4377 | 1.2543 |
| 0.5596 | 1460.0 | 4380 | 1.2578 |
| 0.5596 | 1461.0 | 4383 | 1.2540 |
| 0.5596 | 1462.0 | 4386 | 1.2508 |
| 0.5596 | 1463.0 | 4389 | 1.2523 |
| 0.5596 | 1464.0 | 4392 | 1.2553 |
| 0.5596 | 1465.0 | 4395 | 1.2546 |
| 0.5596 | 1466.0 | 4398 | 1.2581 |
| 0.5596 | 1467.0 | 4401 | 1.2649 |
| 0.5596 | 1468.0 | 4404 | 1.2735 |
| 0.5596 | 1469.0 | 4407 | 1.2883 |
| 0.5596 | 1470.0 | 4410 | 1.3074 |
| 0.5596 | 1471.0 | 4413 | 1.3192 |
| 0.5596 | 1472.0 | 4416 | 1.3282 |
| 0.5596 | 1473.0 | 4419 | 1.3325 |
| 0.5596 | 1474.0 | 4422 | 1.3314 |
| 0.5596 | 1475.0 | 4425 | 1.3250 |
| 0.5596 | 1476.0 | 4428 | 1.3163 |
| 0.5596 | 1477.0 | 4431 | 1.3089 |
| 0.5596 | 1478.0 | 4434 | 1.3000 |
| 0.5596 | 1479.0 | 4437 | 1.3028 |
| 0.5596 | 1480.0 | 4440 | 1.3035 |
| 0.5596 | 1481.0 | 4443 | 1.3072 |
| 0.5596 | 1482.0 | 4446 | 1.3023 |
| 0.5596 | 1483.0 | 4449 | 1.3073 |
| 0.5596 | 1484.0 | 4452 | 1.3085 |
| 0.5596 | 1485.0 | 4455 | 1.3051 |
| 0.5596 | 1486.0 | 4458 | 1.3017 |
| 0.5596 | 1487.0 | 4461 | 1.2962 |
| 0.5596 | 1488.0 | 4464 | 1.2828 |
| 0.5596 | 1489.0 | 4467 | 1.2675 |
| 0.5596 | 1490.0 | 4470 | 1.2643 |
| 0.5596 | 1491.0 | 4473 | 1.2747 |
| 0.5596 | 1492.0 | 4476 | 1.2961 |
| 0.5596 | 1493.0 | 4479 | 1.3016 |
| 0.5596 | 1494.0 | 4482 | 1.2982 |
| 0.5596 | 1495.0 | 4485 | 1.2902 |
| 0.5596 | 1496.0 | 4488 | 1.2810 |
| 0.5596 | 1497.0 | 4491 | 1.2799 |
| 0.5596 | 1498.0 | 4494 | 1.2838 |
| 0.5596 | 1499.0 | 4497 | 1.2849 |
| 0.5585 | 1500.0 | 4500 | 1.2817 |
| 0.5585 | 1501.0 | 4503 | 1.2623 |
| 0.5585 | 1502.0 | 4506 | 1.2476 |
| 0.5585 | 1503.0 | 4509 | 1.2396 |
| 0.5585 | 1504.0 | 4512 | 1.2270 |
| 0.5585 | 1505.0 | 4515 | 1.2198 |
| 0.5585 | 1506.0 | 4518 | 1.2175 |
| 0.5585 | 1507.0 | 4521 | 1.2237 |
| 0.5585 | 1508.0 | 4524 | 1.2332 |
| 0.5585 | 1509.0 | 4527 | 1.2437 |
| 0.5585 | 1510.0 | 4530 | 1.2509 |
| 0.5585 | 1511.0 | 4533 | 1.2516 |
| 0.5585 | 1512.0 | 4536 | 1.2541 |
| 0.5585 | 1513.0 | 4539 | 1.2481 |
| 0.5585 | 1514.0 | 4542 | 1.2460 |
| 0.5585 | 1515.0 | 4545 | 1.2456 |
| 0.5585 | 1516.0 | 4548 | 1.2450 |
| 0.5585 | 1517.0 | 4551 | 1.2441 |
| 0.5585 | 1518.0 | 4554 | 1.2437 |
| 0.5585 | 1519.0 | 4557 | 1.2446 |
| 0.5585 | 1520.0 | 4560 | 1.2490 |
| 0.5585 | 1521.0 | 4563 | 1.2540 |
| 0.5585 | 1522.0 | 4566 | 1.2620 |
| 0.5585 | 1523.0 | 4569 | 1.2615 |
| 0.5585 | 1524.0 | 4572 | 1.2570 |
| 0.5585 | 1525.0 | 4575 | 1.2569 |
| 0.5585 | 1526.0 | 4578 | 1.2570 |
| 0.5585 | 1527.0 | 4581 | 1.2681 |
| 0.5585 | 1528.0 | 4584 | 1.2824 |
| 0.5585 | 1529.0 | 4587 | 1.2947 |
| 0.5585 | 1530.0 | 4590 | 1.2917 |
| 0.5585 | 1531.0 | 4593 | 1.2866 |
| 0.5585 | 1532.0 | 4596 | 1.2758 |
| 0.5585 | 1533.0 | 4599 | 1.2622 |
| 0.5585 | 1534.0 | 4602 | 1.2540 |
| 0.5585 | 1535.0 | 4605 | 1.2411 |
| 0.5585 | 1536.0 | 4608 | 1.2433 |
| 0.5585 | 1537.0 | 4611 | 1.2553 |
| 0.5585 | 1538.0 | 4614 | 1.2590 |
| 0.5585 | 1539.0 | 4617 | 1.2535 |
| 0.5585 | 1540.0 | 4620 | 1.2439 |
| 0.5585 | 1541.0 | 4623 | 1.2461 |
| 0.5585 | 1542.0 | 4626 | 1.2506 |
| 0.5585 | 1543.0 | 4629 | 1.2483 |
| 0.5585 | 1544.0 | 4632 | 1.2488 |
| 0.5585 | 1545.0 | 4635 | 1.2463 |
| 0.5585 | 1546.0 | 4638 | 1.2497 |
| 0.5585 | 1547.0 | 4641 | 1.2608 |
| 0.5585 | 1548.0 | 4644 | 1.2711 |
| 0.5585 | 1549.0 | 4647 | 1.2785 |
| 0.5585 | 1550.0 | 4650 | 1.2751 |
| 0.5585 | 1551.0 | 4653 | 1.2641 |
| 0.5585 | 1552.0 | 4656 | 1.2510 |
| 0.5585 | 1553.0 | 4659 | 1.2358 |
| 0.5585 | 1554.0 | 4662 | 1.2287 |
| 0.5585 | 1555.0 | 4665 | 1.2247 |
| 0.5585 | 1556.0 | 4668 | 1.2228 |
| 0.5585 | 1557.0 | 4671 | 1.2226 |
| 0.5585 | 1558.0 | 4674 | 1.2310 |
| 0.5585 | 1559.0 | 4677 | 1.2332 |
| 0.5585 | 1560.0 | 4680 | 1.2375 |
| 0.5585 | 1561.0 | 4683 | 1.2369 |
| 0.5585 | 1562.0 | 4686 | 1.2275 |
| 0.5585 | 1563.0 | 4689 | 1.2133 |
| 0.5585 | 1564.0 | 4692 | 1.1939 |
| 0.5585 | 1565.0 | 4695 | 1.1805 |
| 0.5585 | 1566.0 | 4698 | 1.1668 |
| 0.5585 | 1567.0 | 4701 | 1.1570 |
| 0.5585 | 1568.0 | 4704 | 1.1510 |
| 0.5585 | 1569.0 | 4707 | 1.1499 |
| 0.5585 | 1570.0 | 4710 | 1.1548 |
| 0.5585 | 1571.0 | 4713 | 1.1644 |
| 0.5585 | 1572.0 | 4716 | 1.1659 |
| 0.5585 | 1573.0 | 4719 | 1.1751 |
| 0.5585 | 1574.0 | 4722 | 1.1975 |
| 0.5585 | 1575.0 | 4725 | 1.2115 |
| 0.5585 | 1576.0 | 4728 | 1.2144 |
| 0.5585 | 1577.0 | 4731 | 1.2082 |
| 0.5585 | 1578.0 | 4734 | 1.1975 |
| 0.5585 | 1579.0 | 4737 | 1.1939 |
| 0.5585 | 1580.0 | 4740 | 1.1906 |
| 0.5585 | 1581.0 | 4743 | 1.1783 |
| 0.5585 | 1582.0 | 4746 | 1.1757 |
| 0.5585 | 1583.0 | 4749 | 1.1792 |
| 0.5585 | 1584.0 | 4752 | 1.1950 |
| 0.5585 | 1585.0 | 4755 | 1.2039 |
| 0.5585 | 1586.0 | 4758 | 1.2107 |
| 0.5585 | 1587.0 | 4761 | 1.2178 |
| 0.5585 | 1588.0 | 4764 | 1.2261 |
| 0.5585 | 1589.0 | 4767 | 1.2340 |
| 0.5585 | 1590.0 | 4770 | 1.2420 |
| 0.5585 | 1591.0 | 4773 | 1.2525 |
| 0.5585 | 1592.0 | 4776 | 1.2740 |
| 0.5585 | 1593.0 | 4779 | 1.2903 |
| 0.5585 | 1594.0 | 4782 | 1.2987 |
| 0.5585 | 1595.0 | 4785 | 1.2991 |
| 0.5585 | 1596.0 | 4788 | 1.2934 |
| 0.5585 | 1597.0 | 4791 | 1.2862 |
| 0.5585 | 1598.0 | 4794 | 1.2868 |
| 0.5585 | 1599.0 | 4797 | 1.2803 |
| 0.5585 | 1600.0 | 4800 | 1.2826 |
| 0.5585 | 1601.0 | 4803 | 1.2763 |
| 0.5585 | 1602.0 | 4806 | 1.2718 |
| 0.5585 | 1603.0 | 4809 | 1.2646 |
| 0.5585 | 1604.0 | 4812 | 1.2668 |
| 0.5585 | 1605.0 | 4815 | 1.2755 |
| 0.5585 | 1606.0 | 4818 | 1.2812 |
| 0.5585 | 1607.0 | 4821 | 1.2905 |
| 0.5585 | 1608.0 | 4824 | 1.2896 |
| 0.5585 | 1609.0 | 4827 | 1.2850 |
| 0.5585 | 1610.0 | 4830 | 1.2822 |
| 0.5585 | 1611.0 | 4833 | 1.2768 |
| 0.5585 | 1612.0 | 4836 | 1.2710 |
| 0.5585 | 1613.0 | 4839 | 1.2660 |
| 0.5585 | 1614.0 | 4842 | 1.2627 |
| 0.5585 | 1615.0 | 4845 | 1.2584 |
| 0.5585 | 1616.0 | 4848 | 1.2485 |
| 0.5585 | 1617.0 | 4851 | 1.2344 |
| 0.5585 | 1618.0 | 4854 | 1.2201 |
| 0.5585 | 1619.0 | 4857 | 1.2069 |
| 0.5585 | 1620.0 | 4860 | 1.1927 |
| 0.5585 | 1621.0 | 4863 | 1.1971 |
| 0.5585 | 1622.0 | 4866 | 1.2042 |
| 0.5585 | 1623.0 | 4869 | 1.2124 |
| 0.5585 | 1624.0 | 4872 | 1.2249 |
| 0.5585 | 1625.0 | 4875 | 1.2413 |
| 0.5585 | 1626.0 | 4878 | 1.2477 |
| 0.5585 | 1627.0 | 4881 | 1.2600 |
| 0.5585 | 1628.0 | 4884 | 1.2676 |
| 0.5585 | 1629.0 | 4887 | 1.2724 |
| 0.5585 | 1630.0 | 4890 | 1.2755 |
| 0.5585 | 1631.0 | 4893 | 1.2782 |
| 0.5585 | 1632.0 | 4896 | 1.2968 |
| 0.5585 | 1633.0 | 4899 | 1.3072 |
| 0.5585 | 1634.0 | 4902 | 1.3119 |
| 0.5585 | 1635.0 | 4905 | 1.3116 |
| 0.5585 | 1636.0 | 4908 | 1.3104 |
| 0.5585 | 1637.0 | 4911 | 1.3071 |
| 0.5585 | 1638.0 | 4914 | 1.3022 |
| 0.5585 | 1639.0 | 4917 | 1.2993 |
| 0.5585 | 1640.0 | 4920 | 1.2960 |
| 0.5585 | 1641.0 | 4923 | 1.2829 |
| 0.5585 | 1642.0 | 4926 | 1.2700 |
| 0.5585 | 1643.0 | 4929 | 1.2669 |
| 0.5585 | 1644.0 | 4932 | 1.2658 |
| 0.5585 | 1645.0 | 4935 | 1.2583 |
| 0.5585 | 1646.0 | 4938 | 1.2580 |
| 0.5585 | 1647.0 | 4941 | 1.2485 |
| 0.5585 | 1648.0 | 4944 | 1.2374 |
| 0.5585 | 1649.0 | 4947 | 1.2234 |
| 0.5585 | 1650.0 | 4950 | 1.2172 |
| 0.5585 | 1651.0 | 4953 | 1.2044 |
| 0.5585 | 1652.0 | 4956 | 1.1955 |
| 0.5585 | 1653.0 | 4959 | 1.1854 |
| 0.5585 | 1654.0 | 4962 | 1.1917 |
| 0.5585 | 1655.0 | 4965 | 1.1924 |
| 0.5585 | 1656.0 | 4968 | 1.1886 |
| 0.5585 | 1657.0 | 4971 | 1.1910 |
| 0.5585 | 1658.0 | 4974 | 1.1913 |
| 0.5585 | 1659.0 | 4977 | 1.1960 |
| 0.5585 | 1660.0 | 4980 | 1.2030 |
| 0.5585 | 1661.0 | 4983 | 1.2132 |
| 0.5585 | 1662.0 | 4986 | 1.2263 |
| 0.5585 | 1663.0 | 4989 | 1.2411 |
| 0.5585 | 1664.0 | 4992 | 1.2572 |
| 0.5585 | 1665.0 | 4995 | 1.2714 |
| 0.5585 | 1666.0 | 4998 | 1.2824 |
| 0.5584 | 1667.0 | 5001 | 1.2862 |
| 0.5584 | 1668.0 | 5004 | 1.2866 |
| 0.5584 | 1669.0 | 5007 | 1.2883 |
| 0.5584 | 1670.0 | 5010 | 1.2868 |
| 0.5584 | 1671.0 | 5013 | 1.2821 |
| 0.5584 | 1672.0 | 5016 | 1.2769 |
| 0.5584 | 1673.0 | 5019 | 1.2708 |
| 0.5584 | 1674.0 | 5022 | 1.2631 |
| 0.5584 | 1675.0 | 5025 | 1.2573 |
| 0.5584 | 1676.0 | 5028 | 1.2570 |
| 0.5584 | 1677.0 | 5031 | 1.2558 |
| 0.5584 | 1678.0 | 5034 | 1.2561 |
| 0.5584 | 1679.0 | 5037 | 1.2551 |
| 0.5584 | 1680.0 | 5040 | 1.2521 |
| 0.5584 | 1681.0 | 5043 | 1.2414 |
| 0.5584 | 1682.0 | 5046 | 1.2274 |
| 0.5584 | 1683.0 | 5049 | 1.2122 |
| 0.5584 | 1684.0 | 5052 | 1.1951 |
| 0.5584 | 1685.0 | 5055 | 1.1893 |
| 0.5584 | 1686.0 | 5058 | 1.1823 |
| 0.5584 | 1687.0 | 5061 | 1.1763 |
| 0.5584 | 1688.0 | 5064 | 1.1725 |
| 0.5584 | 1689.0 | 5067 | 1.1744 |
| 0.5584 | 1690.0 | 5070 | 1.1875 |
| 0.5584 | 1691.0 | 5073 | 1.1946 |
| 0.5584 | 1692.0 | 5076 | 1.2012 |
| 0.5584 | 1693.0 | 5079 | 1.2053 |
| 0.5584 | 1694.0 | 5082 | 1.2083 |
| 0.5584 | 1695.0 | 5085 | 1.2196 |
| 0.5584 | 1696.0 | 5088 | 1.2435 |
| 0.5584 | 1697.0 | 5091 | 1.2554 |
| 0.5584 | 1698.0 | 5094 | 1.2650 |
| 0.5584 | 1699.0 | 5097 | 1.2680 |
| 0.5584 | 1700.0 | 5100 | 1.2642 |
| 0.5584 | 1701.0 | 5103 | 1.2682 |
| 0.5584 | 1702.0 | 5106 | 1.2741 |
| 0.5584 | 1703.0 | 5109 | 1.2736 |
| 0.5584 | 1704.0 | 5112 | 1.2641 |
| 0.5584 | 1705.0 | 5115 | 1.2590 |
| 0.5584 | 1706.0 | 5118 | 1.2602 |
| 0.5584 | 1707.0 | 5121 | 1.2610 |
| 0.5584 | 1708.0 | 5124 | 1.2628 |
| 0.5584 | 1709.0 | 5127 | 1.2661 |
| 0.5584 | 1710.0 | 5130 | 1.2716 |
| 0.5584 | 1711.0 | 5133 | 1.2769 |
| 0.5584 | 1712.0 | 5136 | 1.2820 |
| 0.5584 | 1713.0 | 5139 | 1.2837 |
| 0.5584 | 1714.0 | 5142 | 1.2823 |
| 0.5584 | 1715.0 | 5145 | 1.2832 |
| 0.5584 | 1716.0 | 5148 | 1.2814 |
| 0.5584 | 1717.0 | 5151 | 1.2819 |
| 0.5584 | 1718.0 | 5154 | 1.2820 |
| 0.5584 | 1719.0 | 5157 | 1.2816 |
| 0.5584 | 1720.0 | 5160 | 1.2814 |
| 0.5584 | 1721.0 | 5163 | 1.2813 |
| 0.5584 | 1722.0 | 5166 | 1.2787 |
| 0.5584 | 1723.0 | 5169 | 1.2741 |
| 0.5584 | 1724.0 | 5172 | 1.2706 |
| 0.5584 | 1725.0 | 5175 | 1.2711 |
| 0.5584 | 1726.0 | 5178 | 1.2760 |
| 0.5584 | 1727.0 | 5181 | 1.2812 |
| 0.5584 | 1728.0 | 5184 | 1.2847 |
| 0.5584 | 1729.0 | 5187 | 1.2863 |
| 0.5584 | 1730.0 | 5190 | 1.2881 |
| 0.5584 | 1731.0 | 5193 | 1.2861 |
| 0.5584 | 1732.0 | 5196 | 1.2846 |
| 0.5584 | 1733.0 | 5199 | 1.2825 |
| 0.5584 | 1734.0 | 5202 | 1.2793 |
| 0.5584 | 1735.0 | 5205 | 1.2799 |
| 0.5584 | 1736.0 | 5208 | 1.2794 |
| 0.5584 | 1737.0 | 5211 | 1.2769 |
| 0.5584 | 1738.0 | 5214 | 1.2734 |
| 0.5584 | 1739.0 | 5217 | 1.2713 |
| 0.5584 | 1740.0 | 5220 | 1.2720 |
| 0.5584 | 1741.0 | 5223 | 1.2751 |
| 0.5584 | 1742.0 | 5226 | 1.2776 |
| 0.5584 | 1743.0 | 5229 | 1.2792 |
| 0.5584 | 1744.0 | 5232 | 1.2830 |
| 0.5584 | 1745.0 | 5235 | 1.2845 |
| 0.5584 | 1746.0 | 5238 | 1.2858 |
| 0.5584 | 1747.0 | 5241 | 1.2844 |
| 0.5584 | 1748.0 | 5244 | 1.2823 |
| 0.5584 | 1749.0 | 5247 | 1.2819 |
| 0.5584 | 1750.0 | 5250 | 1.2809 |
| 0.5584 | 1751.0 | 5253 | 1.2805 |
| 0.5584 | 1752.0 | 5256 | 1.2779 |
| 0.5584 | 1753.0 | 5259 | 1.2749 |
| 0.5584 | 1754.0 | 5262 | 1.2768 |
| 0.5584 | 1755.0 | 5265 | 1.2799 |
| 0.5584 | 1756.0 | 5268 | 1.2808 |
| 0.5584 | 1757.0 | 5271 | 1.2788 |
| 0.5584 | 1758.0 | 5274 | 1.2726 |
| 0.5584 | 1759.0 | 5277 | 1.2663 |
| 0.5584 | 1760.0 | 5280 | 1.2611 |
| 0.5584 | 1761.0 | 5283 | 1.2576 |
| 0.5584 | 1762.0 | 5286 | 1.2551 |
| 0.5584 | 1763.0 | 5289 | 1.2647 |
| 0.5584 | 1764.0 | 5292 | 1.2732 |
| 0.5584 | 1765.0 | 5295 | 1.2749 |
| 0.5584 | 1766.0 | 5298 | 1.2798 |
| 0.5584 | 1767.0 | 5301 | 1.2798 |
| 0.5584 | 1768.0 | 5304 | 1.2799 |
| 0.5584 | 1769.0 | 5307 | 1.2805 |
| 0.5584 | 1770.0 | 5310 | 1.2787 |
| 0.5584 | 1771.0 | 5313 | 1.2751 |
| 0.5584 | 1772.0 | 5316 | 1.2724 |
| 0.5584 | 1773.0 | 5319 | 1.2702 |
| 0.5584 | 1774.0 | 5322 | 1.2681 |
| 0.5584 | 1775.0 | 5325 | 1.2680 |
| 0.5584 | 1776.0 | 5328 | 1.2762 |
| 0.5584 | 1777.0 | 5331 | 1.2824 |
| 0.5584 | 1778.0 | 5334 | 1.2878 |
| 0.5584 | 1779.0 | 5337 | 1.2896 |
| 0.5584 | 1780.0 | 5340 | 1.2924 |
| 0.5584 | 1781.0 | 5343 | 1.2972 |
| 0.5584 | 1782.0 | 5346 | 1.2993 |
| 0.5584 | 1783.0 | 5349 | 1.2992 |
| 0.5584 | 1784.0 | 5352 | 1.2982 |
| 0.5584 | 1785.0 | 5355 | 1.2968 |
| 0.5584 | 1786.0 | 5358 | 1.2951 |
| 0.5584 | 1787.0 | 5361 | 1.2933 |
| 0.5584 | 1788.0 | 5364 | 1.2933 |
| 0.5584 | 1789.0 | 5367 | 1.2916 |
| 0.5584 | 1790.0 | 5370 | 1.2882 |
| 0.5584 | 1791.0 | 5373 | 1.2879 |
| 0.5584 | 1792.0 | 5376 | 1.2876 |
| 0.5584 | 1793.0 | 5379 | 1.2848 |
| 0.5584 | 1794.0 | 5382 | 1.2832 |
| 0.5584 | 1795.0 | 5385 | 1.2809 |
| 0.5584 | 1796.0 | 5388 | 1.2803 |
| 0.5584 | 1797.0 | 5391 | 1.2786 |
| 0.5584 | 1798.0 | 5394 | 1.2740 |
| 0.5584 | 1799.0 | 5397 | 1.2691 |
| 0.5584 | 1800.0 | 5400 | 1.2653 |
| 0.5584 | 1801.0 | 5403 | 1.2605 |
| 0.5584 | 1802.0 | 5406 | 1.2591 |
| 0.5584 | 1803.0 | 5409 | 1.2564 |
| 0.5584 | 1804.0 | 5412 | 1.2520 |
| 0.5584 | 1805.0 | 5415 | 1.2478 |
| 0.5584 | 1806.0 | 5418 | 1.2489 |
| 0.5584 | 1807.0 | 5421 | 1.2499 |
| 0.5584 | 1808.0 | 5424 | 1.2530 |
| 0.5584 | 1809.0 | 5427 | 1.2525 |
| 0.5584 | 1810.0 | 5430 | 1.2523 |
| 0.5584 | 1811.0 | 5433 | 1.2526 |
| 0.5584 | 1812.0 | 5436 | 1.2536 |
| 0.5584 | 1813.0 | 5439 | 1.2507 |
| 0.5584 | 1814.0 | 5442 | 1.2481 |
| 0.5584 | 1815.0 | 5445 | 1.2451 |
| 0.5584 | 1816.0 | 5448 | 1.2370 |
| 0.5584 | 1817.0 | 5451 | 1.2326 |
| 0.5584 | 1818.0 | 5454 | 1.2316 |
| 0.5584 | 1819.0 | 5457 | 1.2329 |
| 0.5584 | 1820.0 | 5460 | 1.2352 |
| 0.5584 | 1821.0 | 5463 | 1.2331 |
| 0.5584 | 1822.0 | 5466 | 1.2283 |
| 0.5584 | 1823.0 | 5469 | 1.2228 |
| 0.5584 | 1824.0 | 5472 | 1.2207 |
| 0.5584 | 1825.0 | 5475 | 1.2197 |
| 0.5584 | 1826.0 | 5478 | 1.2164 |
| 0.5584 | 1827.0 | 5481 | 1.2152 |
| 0.5584 | 1828.0 | 5484 | 1.2172 |
| 0.5584 | 1829.0 | 5487 | 1.2181 |
| 0.5584 | 1830.0 | 5490 | 1.2158 |
| 0.5584 | 1831.0 | 5493 | 1.2166 |
| 0.5584 | 1832.0 | 5496 | 1.2138 |
| 0.5584 | 1833.0 | 5499 | 1.2109 |
| 0.5585 | 1834.0 | 5502 | 1.2170 |
| 0.5585 | 1835.0 | 5505 | 1.2216 |
| 0.5585 | 1836.0 | 5508 | 1.2244 |
| 0.5585 | 1837.0 | 5511 | 1.2267 |
| 0.5585 | 1838.0 | 5514 | 1.2321 |
| 0.5585 | 1839.0 | 5517 | 1.2359 |
| 0.5585 | 1840.0 | 5520 | 1.2415 |
| 0.5585 | 1841.0 | 5523 | 1.2507 |
| 0.5585 | 1842.0 | 5526 | 1.2623 |
| 0.5585 | 1843.0 | 5529 | 1.2675 |
| 0.5585 | 1844.0 | 5532 | 1.2701 |
| 0.5585 | 1845.0 | 5535 | 1.2701 |
| 0.5585 | 1846.0 | 5538 | 1.2698 |
| 0.5585 | 1847.0 | 5541 | 1.2720 |
| 0.5585 | 1848.0 | 5544 | 1.2740 |
| 0.5585 | 1849.0 | 5547 | 1.2751 |
| 0.5585 | 1850.0 | 5550 | 1.2771 |
| 0.5585 | 1851.0 | 5553 | 1.2801 |
| 0.5585 | 1852.0 | 5556 | 1.2817 |
| 0.5585 | 1853.0 | 5559 | 1.2834 |
| 0.5585 | 1854.0 | 5562 | 1.2851 |
| 0.5585 | 1855.0 | 5565 | 1.2870 |
| 0.5585 | 1856.0 | 5568 | 1.2885 |
| 0.5585 | 1857.0 | 5571 | 1.2872 |
| 0.5585 | 1858.0 | 5574 | 1.2855 |
| 0.5585 | 1859.0 | 5577 | 1.2835 |
| 0.5585 | 1860.0 | 5580 | 1.2837 |
| 0.5585 | 1861.0 | 5583 | 1.2837 |
| 0.5585 | 1862.0 | 5586 | 1.2828 |
| 0.5585 | 1863.0 | 5589 | 1.2814 |
| 0.5585 | 1864.0 | 5592 | 1.2794 |
| 0.5585 | 1865.0 | 5595 | 1.2781 |
| 0.5585 | 1866.0 | 5598 | 1.2806 |
| 0.5585 | 1867.0 | 5601 | 1.2827 |
| 0.5585 | 1868.0 | 5604 | 1.2827 |
| 0.5585 | 1869.0 | 5607 | 1.2828 |
| 0.5585 | 1870.0 | 5610 | 1.2827 |
| 0.5585 | 1871.0 | 5613 | 1.2810 |
| 0.5585 | 1872.0 | 5616 | 1.2799 |
| 0.5585 | 1873.0 | 5619 | 1.2784 |
| 0.5585 | 1874.0 | 5622 | 1.2760 |
| 0.5585 | 1875.0 | 5625 | 1.2729 |
| 0.5585 | 1876.0 | 5628 | 1.2710 |
| 0.5585 | 1877.0 | 5631 | 1.2718 |
| 0.5585 | 1878.0 | 5634 | 1.2747 |
| 0.5585 | 1879.0 | 5637 | 1.2779 |
| 0.5585 | 1880.0 | 5640 | 1.2808 |
| 0.5585 | 1881.0 | 5643 | 1.2827 |
| 0.5585 | 1882.0 | 5646 | 1.2821 |
| 0.5585 | 1883.0 | 5649 | 1.2822 |
| 0.5585 | 1884.0 | 5652 | 1.2834 |
| 0.5585 | 1885.0 | 5655 | 1.2828 |
| 0.5585 | 1886.0 | 5658 | 1.2808 |
| 0.5585 | 1887.0 | 5661 | 1.2784 |
| 0.5585 | 1888.0 | 5664 | 1.2760 |
| 0.5585 | 1889.0 | 5667 | 1.2731 |
| 0.5585 | 1890.0 | 5670 | 1.2704 |
| 0.5585 | 1891.0 | 5673 | 1.2704 |
| 0.5585 | 1892.0 | 5676 | 1.2701 |
| 0.5585 | 1893.0 | 5679 | 1.2696 |
| 0.5585 | 1894.0 | 5682 | 1.2657 |
| 0.5585 | 1895.0 | 5685 | 1.2590 |
| 0.5585 | 1896.0 | 5688 | 1.2525 |
| 0.5585 | 1897.0 | 5691 | 1.2475 |
| 0.5585 | 1898.0 | 5694 | 1.2441 |
| 0.5585 | 1899.0 | 5697 | 1.2416 |
| 0.5585 | 1900.0 | 5700 | 1.2422 |
| 0.5585 | 1901.0 | 5703 | 1.2433 |
| 0.5585 | 1902.0 | 5706 | 1.2443 |
| 0.5585 | 1903.0 | 5709 | 1.2453 |
| 0.5585 | 1904.0 | 5712 | 1.2513 |
| 0.5585 | 1905.0 | 5715 | 1.2538 |
| 0.5585 | 1906.0 | 5718 | 1.2554 |
| 0.5585 | 1907.0 | 5721 | 1.2567 |
| 0.5585 | 1908.0 | 5724 | 1.2573 |
| 0.5585 | 1909.0 | 5727 | 1.2580 |
| 0.5585 | 1910.0 | 5730 | 1.2579 |
| 0.5585 | 1911.0 | 5733 | 1.2576 |
| 0.5585 | 1912.0 | 5736 | 1.2567 |
| 0.5585 | 1913.0 | 5739 | 1.2552 |
| 0.5585 | 1914.0 | 5742 | 1.2542 |
| 0.5585 | 1915.0 | 5745 | 1.2539 |
| 0.5585 | 1916.0 | 5748 | 1.2530 |
| 0.5585 | 1917.0 | 5751 | 1.2534 |
| 0.5585 | 1918.0 | 5754 | 1.2542 |
| 0.5585 | 1919.0 | 5757 | 1.2537 |
| 0.5585 | 1920.0 | 5760 | 1.2527 |
| 0.5585 | 1921.0 | 5763 | 1.2517 |
| 0.5585 | 1922.0 | 5766 | 1.2510 |
| 0.5585 | 1923.0 | 5769 | 1.2496 |
| 0.5585 | 1924.0 | 5772 | 1.2497 |
| 0.5585 | 1925.0 | 5775 | 1.2491 |
| 0.5585 | 1926.0 | 5778 | 1.2483 |
| 0.5585 | 1927.0 | 5781 | 1.2462 |
| 0.5585 | 1928.0 | 5784 | 1.2437 |
| 0.5585 | 1929.0 | 5787 | 1.2406 |
| 0.5585 | 1930.0 | 5790 | 1.2390 |
| 0.5585 | 1931.0 | 5793 | 1.2390 |
| 0.5585 | 1932.0 | 5796 | 1.2390 |
| 0.5585 | 1933.0 | 5799 | 1.2409 |
| 0.5585 | 1934.0 | 5802 | 1.2442 |
| 0.5585 | 1935.0 | 5805 | 1.2473 |
| 0.5585 | 1936.0 | 5808 | 1.2490 |
| 0.5585 | 1937.0 | 5811 | 1.2516 |
| 0.5585 | 1938.0 | 5814 | 1.2542 |
| 0.5585 | 1939.0 | 5817 | 1.2565 |
| 0.5585 | 1940.0 | 5820 | 1.2594 |
| 0.5585 | 1941.0 | 5823 | 1.2610 |
| 0.5585 | 1942.0 | 5826 | 1.2623 |
| 0.5585 | 1943.0 | 5829 | 1.2636 |
| 0.5585 | 1944.0 | 5832 | 1.2657 |
| 0.5585 | 1945.0 | 5835 | 1.2667 |
| 0.5585 | 1946.0 | 5838 | 1.2676 |
| 0.5585 | 1947.0 | 5841 | 1.2685 |
| 0.5585 | 1948.0 | 5844 | 1.2696 |
| 0.5585 | 1949.0 | 5847 | 1.2707 |
| 0.5585 | 1950.0 | 5850 | 1.2707 |
| 0.5585 | 1951.0 | 5853 | 1.2710 |
| 0.5585 | 1952.0 | 5856 | 1.2707 |
| 0.5585 | 1953.0 | 5859 | 1.2694 |
| 0.5585 | 1954.0 | 5862 | 1.2673 |
| 0.5585 | 1955.0 | 5865 | 1.2650 |
| 0.5585 | 1956.0 | 5868 | 1.2625 |
| 0.5585 | 1957.0 | 5871 | 1.2614 |
| 0.5585 | 1958.0 | 5874 | 1.2605 |
| 0.5585 | 1959.0 | 5877 | 1.2599 |
| 0.5585 | 1960.0 | 5880 | 1.2599 |
| 0.5585 | 1961.0 | 5883 | 1.2598 |
| 0.5585 | 1962.0 | 5886 | 1.2585 |
| 0.5585 | 1963.0 | 5889 | 1.2572 |
| 0.5585 | 1964.0 | 5892 | 1.2555 |
| 0.5585 | 1965.0 | 5895 | 1.2527 |
| 0.5585 | 1966.0 | 5898 | 1.2513 |
| 0.5585 | 1967.0 | 5901 | 1.2504 |
| 0.5585 | 1968.0 | 5904 | 1.2508 |
| 0.5585 | 1969.0 | 5907 | 1.2511 |
| 0.5585 | 1970.0 | 5910 | 1.2517 |
| 0.5585 | 1971.0 | 5913 | 1.2528 |
| 0.5585 | 1972.0 | 5916 | 1.2537 |
| 0.5585 | 1973.0 | 5919 | 1.2543 |
| 0.5585 | 1974.0 | 5922 | 1.2549 |
| 0.5585 | 1975.0 | 5925 | 1.2554 |
| 0.5585 | 1976.0 | 5928 | 1.2554 |
| 0.5585 | 1977.0 | 5931 | 1.2555 |
| 0.5585 | 1978.0 | 5934 | 1.2554 |
| 0.5585 | 1979.0 | 5937 | 1.2553 |
| 0.5585 | 1980.0 | 5940 | 1.2554 |
| 0.5585 | 1981.0 | 5943 | 1.2556 |
| 0.5585 | 1982.0 | 5946 | 1.2563 |
| 0.5585 | 1983.0 | 5949 | 1.2567 |
| 0.5585 | 1984.0 | 5952 | 1.2567 |
| 0.5585 | 1985.0 | 5955 | 1.2567 |
| 0.5585 | 1986.0 | 5958 | 1.2566 |
| 0.5585 | 1987.0 | 5961 | 1.2566 |
| 0.5585 | 1988.0 | 5964 | 1.2564 |
| 0.5585 | 1989.0 | 5967 | 1.2563 |
| 0.5585 | 1990.0 | 5970 | 1.2564 |
| 0.5585 | 1991.0 | 5973 | 1.2564 |
| 0.5585 | 1992.0 | 5976 | 1.2564 |
| 0.5585 | 1993.0 | 5979 | 1.2565 |
| 0.5585 | 1994.0 | 5982 | 1.2565 |
| 0.5585 | 1995.0 | 5985 | 1.2564 |
| 0.5585 | 1996.0 | 5988 | 1.2563 |
| 0.5585 | 1997.0 | 5991 | 1.2563 |
| 0.5585 | 1998.0 | 5994 | 1.2562 |
| 0.5585 | 1999.0 | 5997 | 1.2562 |
| 0.558 | 2000.0 | 6000 | 1.2562 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
ap08/bert_custom-squad
|
ap08
| 2024-03-07T17:19:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T17:16:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alpaca69B/gemma-2b-absa-3epoches
|
Alpaca69B
| 2024-03-07T17:18:40Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T15:29:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VAGOsolutions/SauerkrautLM-Gemma-2b
|
VAGOsolutions
| 2024-03-07T17:17:08Z | 126 | 8 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"sft",
"laserRMT",
"laser-QLoRa",
"finetune",
"work in progress",
"alpha",
"de",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T23:26:18Z |
---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- de
- en
tags:
- sft
- laserRMT
- laser-QLoRa
- finetune
- work in progress
- alpha
---

## VAGO solutions SauerkrautLM-Gemma-2b (alpha)
Introducing **SauerkrautLM-Gemma-2b** – our German Sauerkraut version of the powerful [google/gemma-2b](https://huggingface.co/google/gemma-2b)!
**It is an early stage finetuned model and should be used with caution!**
The model **SauerkrautLM-Gemma-2b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai.**
Much appreciation goes to the tremendous research effort of **Fernando Fernandes Neto, David Golchinfar and Eric Hartford on their laserRMT approach.**
Without their independent research collaboration this model release would not have been possible.
- Finetuned with **SFT**
- **Using a novel training technique: laser-QLoRA** - we partially freeze the model according to a laser-like analysis (official paper soon). This allows us to evaluate the trade-offs implied by the no-free-lunch theorem and supports better decision-making when optimizing against them - created by the [LaserRMT research group](https://github.com/cognitivecomputations/laserRMT)
- Optimized with **LaserRMT**
# Table of Contents
1. [Overview of all SauerkrautLM-Gemma-2b models](#all-sauerkrautlm-gemma-2b-models)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training procedure](#training-procedure)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)
## All SauerkrautLM-Gemma-2b Models
| Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Gemma-2b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | coming soon | coming soon | coming soon |
## Model Details
**SauerkrautLM-Gemma-2b**
- **Model Type:** SauerkrautLM-Gemma-2b is a finetuned Model based on [google/gemma-2b](https://huggingface.co/google/gemma-2b)
- **Language(s):** German, English
- **License:** [gemma-terms-of-use](https://ai.google.dev/gemma/terms)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)
### Training procedure:
**Warning**: **This finetuned model is in an early stage and we sometimes observed strange behavior. It is still work in progress!**
Anyone who has attempted or succeeded in fine-tuning a model is aware of the difficulty in nudging it towards a specific skill, such as mastering new languages, as well as the challenges associated with achieving significant improvements in performance.
Experimenting with a novel training strategy and Spherical Linear Interpolation alongside a lasered version of the model itself has proven to be both fascinating and revealing.
Furthermore, we developed one iteration of the model using our entire SFT Sauerkraut dataset and two additional iterations using subsets of the full dataset—one focused on enhancing MMLU and TQA capabilities, and the other on boosting GSM8K and Winogrande skills.
We actively monitored and assessed the results of each training run. Whenever we found a decrease in perplexity on the GSM8K benchmark, we intervened. By following this procedure we were able to improve the overall performance, especially in math abilities, without detracting from performance on other benchmarks—a task that is, in general, quite difficult.
This process not only helps in understanding the effectiveness of Spherical Linear Interpolation but also introduces a new method for refining models with enhanced skills through a cycle of targeted data selection (Laser data(x)) + SLERP, followed by a subsequent focus on different data (Laser again on data(y)).
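The merging cycle above leans on Spherical Linear Interpolation (SLERP) between model checkpoints. As a rough, self-contained illustration of the interpolation step on two flattened weight vectors (a sketch only, not the pipeline the authors actually used):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the great-circle
    arc between the two directions rather than a straight chord.
    """
    # Angle between the two vectors, computed on normalized copies
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel: fall back to plain linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

In a real merge this would be applied per parameter tensor (often with per-layer interpolation weights); the snippet shows only the core formula.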
Additionally, we integrated a novel training strategy on the SFT training process, where we partially freeze the model according to a laser-like analysis aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This innovative training method effectively prevents the significant problem of language models forgetting previously acquired knowledge.
This aspect is particularly crucial when attempting to teach the model specific skills, such as a new language, where in general, the model might lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence.
Detailed information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group.
**We taught this model German language skills.** As far as we know, it is the first Gemma-2b model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still a work in progress).
### Prompt Template:
We trained on the Vicuna prompt template. Please add the following stopping strings to your client: ``` "</s>","</p>" ``` (we did not add the special tokens to the training config)
```
You are a helpful AI Assistant.
USER: Hello, how are you?
ASSISTANT:
```
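As a concrete client-side illustration, the template and stop strings above could be applied like this (helper names are hypothetical; only the prompt format and stop strings come from the card):

```python
# Stop strings suggested by the model card
STOP_STRINGS = ["</s>", "</p>"]

def build_prompt(user_message: str) -> str:
    """Format a single-turn request in the Vicuna-style template used for training."""
    return (
        "You are a helpful AI Assistant.\n"
        f"USER: {user_message}\n"
        "ASSISTANT:"
    )

def trim_at_stop(text: str, stops=STOP_STRINGS) -> str:
    """Cut the generated text at the first occurrence of any stop string."""
    for s in stops:
        idx = text.find(s)
        if idx != -1:
            text = text[:idx]
    return text.strip()
```

The raw model output would be passed through `trim_at_stop` after generation; clients that support native stop sequences can pass `STOP_STRINGS` directly instead.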
## Evaluation
(with lm-evaluation-harness 0.4.1)
**Open LLM Leaderboard:**
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **48.93** |
| ARC (25-shot) | 49.32 |
| HellaSwag (10-shot) | 71.23 |
| MMLU (5-shot)         | 42.06 |
| TruthfulQA (0-shot) | 35.73 |
| Winogrande (5-shot) | 67.56 |
| GSM8K (5-shot) | 27.67 |
**Performance**
| Model |AGIEval|GPT4All|TruthfulQA|BigBench|Average ⬇️|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[VAGOsolutions/SauerkrautLM-Gemma-7b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-7b) | 37.5| 72.46| 61.24| 45.33| 54.13|
|[zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 37.52| 71.77| 55.26| 39.77| 51.08|
|[zephyr-7b-gemma-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-7b-gemma-v0.1)| 34.22| 66.37| 52.19| 37.10| 47.47|
|[VAGOsolutions/SauerkrautLM-Gemma-2b](https://huggingface.co/VAGOsolutions/SauerkrautLM-Gemma-2b) | 24.28| 63.59| 35.73| 22.77| 36.59|
|[google/gemma-7b-it](https://huggingface.co/google/gemma-7b-it) | 21.33| 40.84| 41.70| 30.25| 33.53|
<details><summary>Details of AGIEval, GPT4All, TruthfulQA, BigBench </summary>
**AGIEval**
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|------------------------------|------:|------|------|--------|-----:|---|-----:|
|agieval_sat_math | 1|none |None |acc |0.2409|± |0.0289|
| | |none |None |acc_norm|0.2455|± |0.0291|
|agieval_sat_en_without_passage| 1|none |None |acc |0.3010|± |0.0320|
| | |none |None |acc_norm|0.2816|± |0.0314|
|agieval_sat_en | 1|none |None |acc |0.3301|± |0.0328|
| | |none |None |acc_norm|0.2961|± |0.0319|
|agieval_lsat_rc | 1|none |None |acc |0.2007|± |0.0245|
| | |none |None |acc_norm|0.1933|± |0.0241|
|agieval_lsat_lr | 1|none |None |acc |0.1941|± |0.0175|
| | |none |None |acc_norm|0.2039|± |0.0179|
|agieval_lsat_ar | 1|none |None |acc |0.2304|± |0.0278|
| | |none |None |acc_norm|0.2391|± |0.0282|
|agieval_logiqa_en | 1|none |None |acc |0.2089|± |0.0159|
| | |none |None |acc_norm|0.2581|± |0.0172|
|agieval_aqua_rat | 1|none |None |acc |0.2480|± |0.0272|
| | |none |None |acc_norm|0.2244|± |0.0262|
Average: 24.28%
**GPT4All**
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|------:|------|------|--------|-----:|---|-----:|
|arc_challenge| 1|none |None |acc |0.4334|± |0.0145|
| | |none |None |acc_norm|0.4309|± |0.0145|
|arc_easy | 1|none |None |acc |0.7433|± |0.0090|
| | |none |None |acc_norm|0.7264|± |0.0091|
|boolq | 2|none |None |acc |0.7165|± |0.0079|
|hellaswag | 1|none |None |acc |0.5357|± |0.0050|
| | |none |None |acc_norm|0.7158|± |0.0045|
|openbookqa | 1|none |None |acc |0.318 |± |0.0208|
| | |none |None |acc_norm|0.402 |± |0.0219|
|piqa | 1|none |None |acc |0.7709|± |0.0098|
| | |none |None |acc_norm|0.7807|± |0.0097|
|winogrande | 1|none |None |acc |0.6788|± |0.0131|
Average: 63.59%
**TruthfulQA**
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|------:|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2| 2|none | 0|acc |0.3573|± |0.0135|
Average: 35.73%
**Bigbench**
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------|------:|----------------|-----:|-----------|-----:|---|-----:|
|bbh_zeroshot_tracking_shuffled_objects_three_objects| 2|flexible-extract| 0|exact_match|0.3280|± |0.0298|
|bbh_zeroshot_tracking_shuffled_objects_seven_objects| 2|flexible-extract| 0|exact_match|0.1120|± |0.0200|
|bbh_zeroshot_tracking_shuffled_objects_five_objects | 2|flexible-extract| 0|exact_match|0.1520|± |0.0228|
|bbh_zeroshot_temporal_sequences | 2|flexible-extract| 0|exact_match|0.1000|± |0.0190|
|bbh_zeroshot_sports_understanding | 2|flexible-extract| 0|exact_match|0.5360|± |0.0316|
|bbh_zeroshot_snarks | 2|flexible-extract| 0|exact_match|0.2753|± |0.0336|
|bbh_zeroshot_salient_translation_error_detection | 2|flexible-extract| 0|exact_match|0.1400|± |0.0220|
|bbh_zeroshot_ruin_names | 2|flexible-extract| 0|exact_match|0.1120|± |0.0200|
|bbh_zeroshot_reasoning_about_colored_objects | 2|flexible-extract| 0|exact_match|0.1080|± |0.0197|
|bbh_zeroshot_navigate | 2|flexible-extract| 0|exact_match|0.5800|± |0.0313|
|bbh_zeroshot_movie_recommendation | 2|flexible-extract| 0|exact_match|0.4360|± |0.0314|
|bbh_zeroshot_logical_deduction_three_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000|
|bbh_zeroshot_logical_deduction_seven_objects | 2|flexible-extract| 0|exact_match|0.0720|± |0.0164|
|bbh_zeroshot_logical_deduction_five_objects | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000|
|bbh_zeroshot_geometric_shapes | 2|flexible-extract| 0|exact_match|0.0000|± |0.0000|
|bbh_zeroshot_disambiguation_qa | 2|flexible-extract| 0|exact_match|0.3400|± |0.0300|
|bbh_zeroshot_date_understanding | 2|flexible-extract| 0|exact_match|0.3360|± |0.0299|
|bbh_zeroshot_causal_judgement | 2|flexible-extract| 0|exact_match|0.4706|± |0.0366|
Average: 22.77%
</details>
## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.
## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.
## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)
## Acknowledgement
Many thanks to [google](https://huggingface.co/google) for providing such a valuable model to the open-source community
|
Humaid-alblooshi/bert-pretrained-base-5-epoch
|
Humaid-alblooshi
| 2024-03-07T17:14:47Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T17:14:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Koleshjr/final_updated_model
|
Koleshjr
| 2024-03-07T17:13:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T15:26:06Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** Koleshjr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
s14pe/ppo-SnowballTarget
|
s14pe
| 2024-03-07T17:10:01Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-03-07T17:09:57Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: s14pe/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
danlund4/q-FrozenLake-v1-4x4-noSlippery
|
danlund4
| 2024-03-07T17:08:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T17:08:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is defined in the Deep RL Course notebook (Unit 2); `gym` provides the environment.
model = load_from_hub(repo_id="danlund4/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
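The pickled agent above is a tabular Q-learning policy. As a rough illustration of the update rule such an agent is trained with (not the course's exact implementation; `alpha` and `gamma` here are placeholder values), the core step looks like:

```python
import numpy as np

def q_update(q_table, state, action, reward, next_state, alpha=0.7, gamma=0.95):
    """One tabular Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])
    return q_table

# Tiny example: 2 states, 2 actions, all-zero table.
q = np.zeros((2, 2))
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0, 1])  # 0.7 * (1.0 + 0.95 * 0 - 0) = 0.7
```

Repeating this update while acting epsilon-greedily over many episodes produces the Q-table stored in `q-learning.pkl`.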
|
justdimaa/ppo-SnowballTarget
|
justdimaa
| 2024-03-07T17:04:47Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-03-07T17:04:42Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: justdimaa/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
guirrock/gemma-Finetune-bloom-taxonomy
|
guirrock
| 2024-03-07T17:01:39Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T20:04:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Humaid-alblooshi/bert-test-5-epoch
|
Humaid-alblooshi
| 2024-03-07T16:57:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-02T15:27:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vishalp23/distilbert-subject-classifier
|
vishalp23
| 2024-03-07T16:53:03Z | 0 | 1 | null |
[
"bert",
"subject-classification",
"text-classification",
"en",
"arxiv:1910.09700",
"arxiv:2105.09680",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-03-07T16:46:04Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
tags:
- bert
- subject-classification
- text-classification
---
# Subject Classifier built on Distilbert
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Environmental Impact](#environmental-impact)
## Model Details
**Model Description:** This is the [uncased DistilBERT model](https://huggingface.co/distilbert-base-uncased) fine-tuned on a custom dataset that is built on the [IITJEE NEET AIIMS Students Questions Data](https://www.kaggle.com/datasets/mrutyunjaybiswal/iitjee-neet-aims-students-questions-data?resource=download) for the subject classification task.
- **Developed by:** The [Typeform](https://www.typeform.com/) team.
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** GNU GENERAL PUBLIC LICENSE
- **Parent Model:** See the [distilbert base uncased model](https://huggingface.co/distilbert-base-uncased) for more information about the Distilled-BERT base model.
## Uses
This model can be used for text classification tasks.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
## Training
Training was done on an [NVIDIA RTX 3070](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3070-3070ti/) with an [AMD Ryzen 7 5800](https://www.amd.com/en/products/cpu/amd-ryzen-7-5800) CPU, using the following hyperparameters:
```
$ training.ipynb \
--model_name_or_path distilbert-base-uncased \
--do_train \
--do_eval \
--max_seq_length 512 \
--per_device_train_batch_size 4 \
--learning_rate 1e-05 \
--num_train_epochs 5 \
```
## Evaluation
#### Evaluation Results
When fine-tuned on downstream tasks, this model achieves the following results:
Epochs: 5 | Train Loss: 0.001 | Train Accuracy: 0.989 | Val Loss: 0.006 | Val Accuracy: 0.950
CPU times: user 18h 19min 13s, sys: 1min 34s, total: 18h 20min 47s
Wall time: 18h 20min 7s
- **Epochs:** 5.0
- **Evaluation Accuracy:** 0.950
- **Evaluation Loss:** 0.006
- **Training Accuracy:** 0.989
- **Training Loss:** 0.001
#### Testing Results
| | precision | recall | f1-score | support |
|-----------------|-----------|--------|----------|---------|
| biology | 0.98 | 0.99 | 0.99 | 15988 |
| chemistry | 1.00 | 0.99 | 0.99 | 20678 |
| computer | 1.00 | 0.99 | 0.99 | 8754 |
| maths | 1.00 | 1.00 | 1.00 | 26661 |
| physics | 0.99 | 0.98 | 0.99 | 10306 |
| social sciences | 0.99 | 1.00 | 0.99 | 25695 |
| | | | | |
| accuracy        |           |        | 0.99     | 108082  |
| macro avg | 0.99 | 0.99 | 0.99 | 108082 |
| weighted avg | 0.99 | 0.99 | 0.99 | 108082 |
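The "weighted avg" row above is a support-weighted mean of the per-class scores; a minimal sketch of that calculation, using the f1-scores and supports from the table:

```python
def weighted_avg(scores, supports):
    """Support-weighted mean, as used for the 'weighted avg' row of a classification report."""
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

# Per-class f1-scores and supports, in table order (biology ... social sciences).
f1 = [0.99, 0.99, 0.99, 1.00, 0.99, 0.99]
support = [15988, 20678, 8754, 26661, 10306, 25695]
print(round(weighted_avg(f1, support), 2))  # 0.99
```

The "macro avg" row is the same computation with all supports set equal.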
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf).
**Hardware Type:** 1 NVIDIA RTX 3070
**Hours used:** 18h 19min 13s
**Carbon Emitted:** (Power consumption x Time x Carbon produced based on location of power grid): Unknown
|
JoniJoniAl/testsmall7maart
|
JoniJoniAl
| 2024-03-07T16:51:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T10:04:01Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** JoniJoniAl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rajesh2004/text-to-image-ai-model
|
Rajesh2004
| 2024-03-07T16:42:07Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-07T16:37:57Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Text-to-Image-AI-Model Dreambooth model trained by Rajesh2004 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: AEC730222243020
Sample pictures of this concept:
|
kamyar-mroadian/NLP_HF_Workshop
|
kamyar-mroadian
| 2024-03-07T16:41:04Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"sentimental-analysis",
"emotion",
"en",
"dataset:dair-ai/emotion",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T16:25:41Z |
---
license: mit
datasets:
- dair-ai/emotion
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- sentimental-analysis
- roberta
- emotion
---
# Model Card for NLP-HF-Workshop
<!-- Provide a quick summary of what the model is/does. -->
This model fine-tunes FacebookAI/roberta-base on the dair-ai/emotion dataset to classify text into the dataset's six emotions.
|
Vas123/130000
|
Vas123
| 2024-03-07T16:39:51Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gptj",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T14:11:56Z |
---
tags:
- generated_from_trainer
model-index:
- name: '130000'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 130000
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.92 | 3 | 6.2222 |
| No log | 1.85 | 6 | 6.2146 |
| No log | 2.77 | 9 | 6.2032 |
| 5.9665 | 4.0 | 13 | 6.1877 |
| 5.9665 | 4.92 | 16 | 6.1734 |
| 5.9665 | 5.85 | 19 | 6.1620 |
| 5.8921 | 6.77 | 22 | 6.1539 |
| 5.8921 | 8.0 | 26 | 6.1426 |
| 5.8921 | 8.92 | 29 | 6.1335 |
| 5.8324 | 9.85 | 32 | 6.1277 |
| 5.8324 | 10.77 | 35 | 6.1178 |
| 5.8324 | 12.0 | 39 | 6.1105 |
| 5.8012 | 12.92 | 42 | 6.1059 |
| 5.8012 | 13.85 | 45 | 6.0992 |
| 5.8012 | 14.77 | 48 | 6.0959 |
| 5.7449 | 16.0 | 52 | 6.0910 |
| 5.7449 | 16.92 | 55 | 6.0859 |
| 5.7449 | 17.85 | 58 | 6.0819 |
| 5.7303 | 18.77 | 61 | 6.0767 |
| 5.7303 | 20.0 | 65 | 6.0734 |
| 5.7303 | 20.92 | 68 | 6.0721 |
| 5.6687 | 21.85 | 71 | 6.0694 |
| 5.6687 | 22.77 | 74 | 6.0658 |
| 5.6687 | 24.0 | 78 | 6.0628 |
| 5.6839 | 24.92 | 81 | 6.0627 |
| 5.6839 | 25.85 | 84 | 6.0600 |
| 5.6839 | 26.77 | 87 | 6.0586 |
| 5.6499 | 28.0 | 91 | 6.0572 |
| 5.6499 | 28.92 | 94 | 6.0558 |
| 5.6499 | 29.85 | 97 | 6.0555 |
| 5.6703 | 30.77 | 100 | 6.0545 |
| 5.6703 | 32.0 | 104 | 6.0533 |
| 5.6703 | 32.92 | 107 | 6.0520 |
| 5.6404 | 33.85 | 110 | 6.0518 |
| 5.6404 | 34.77 | 113 | 6.0511 |
| 5.6404 | 36.0 | 117 | 6.0509 |
| 5.6414 | 36.92 | 120 | 6.0504 |
| 5.6414 | 37.85 | 123 | 6.0498 |
| 5.6414 | 38.77 | 126 | 6.0498 |
| 5.6347 | 40.0 | 130 | 6.0496 |
| 5.6347 | 40.92 | 133 | 6.0493 |
| 5.6347 | 41.85 | 136 | 6.0491 |
| 5.6347 | 42.77 | 139 | 6.0491 |
| 5.638 | 44.0 | 143 | 6.0491 |
| 5.638 | 44.92 | 146 | 6.0491 |
| 5.638 | 45.85 | 149 | 6.0491 |
| 5.6249 | 46.15 | 150 | 6.0491 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
justdimaa/Reinforce-Pixelcopter-PLE-v0
|
justdimaa
| 2024-03-07T16:34:31Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T14:58:57Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 36.40 +/- 23.41
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
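A Reinforce agent weights its policy-gradient update by the discounted return of each step. As a generic illustration of that return computation (the discount factor here is a placeholder, not the value used for this run):

```python
def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1}, iterating backwards over one episode's rewards."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return returns

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.5))  # [1.75, 1.5, 1.0]
```

Each log-probability in the episode is then scaled by its (usually normalized) return before the gradient step.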
|
BlitherBoom/sarsa-FrozenLake-v1-4x4-noSlippery-0307
|
BlitherBoom
| 2024-03-07T16:32:41Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"sarsa",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T16:32:39Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- sarsa
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: sarsa-FrozenLake-v1-4x4-noSlippery-0307
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 0.90 +/- 0.30
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is defined in the Deep RL Course notebook (Unit 2); `gym` provides the environment.
model = load_from_hub(repo_id="BlitherBoom/sarsa-FrozenLake-v1-4x4-noSlippery-0307", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
field2437/phi-2-platypus-Commercial-lora
|
field2437
| 2024-03-07T16:31:15Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"dataset:kyujinpy/Open-platypus-Commercial",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T01:21:15Z |
---
language:
- en
datasets:
- kyujinpy/Open-platypus-Commercial
library_name: transformers
pipeline_tag: text-generation
license: mit
---
# **phi-2-platypus-Commercial-lora**
## Model Details
**Model Developers**
- field2437
**Base Model**
- [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
**Training Dataset**
- [kyujinpy/Open-platypus-Commercial](https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial)
---
# Model comparisons
> AI-Harness evaluation; [link](https://github.com/EleutherAI/lm-evaluation-harness)
| Model | Copa | HellaSwag | BoolQ | MMLU |
| --- | --- | --- | --- | --- |
| | 0-shot | 0-shot | 0-shot | 0-shot |
| **phi-2-platypus-Commercial-lora** | 0.8900 | 0.5573 | 0.8260 | 0.5513 |
---
# Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("field2437/phi-2-platypus-Commercial-lora", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("field2437/phi-2-platypus-Commercial-lora", trust_remote_code=True)
inputs = tokenizer('''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Let $f(x)$ be the polynomial \\[f(x)=3x^4+5x^2-9x-2.\\] If $g(x)$ is equal to the polynomial $f(x-1)$, what is the sum of the coefficients of $g$?
### Response:
''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
---
|
ibm-research/re2g-reranker-nq
|
ibm-research
| 2024-03-07T16:30:08Z | 463 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"information retrieval",
"reranking",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-29T16:05:21Z |
---
tags:
- information retrieval
- reranking
license: apache-2.0
---
# Model Card for NQ Reranker in Re2G
# Model Details
> The approach of RAG, Multi-DPR, and KGI is to train a neural IR (Information Retrieval) component and further train it end-to-end through its impact in generating the correct output.
>
>It has been previously established that results from initial retrieval can be greatly improved through the use of a reranker. Therefore we hypothesized that natural language generation systems incorporating retrieval can benefit from reranking.
>
>In addition to improving the ranking of passages returned from DPR, a reranker can be used after merging the results of multiple retrieval methods with incomparable scores. For example, the scores returned by BM25 are not comparable to the inner products from DPR. Using the scores from a reranker, we can find the top-k documents from the union of DPR and BM25 results. The figure below illustrates our extension of RAG with a reranker. We call our system Re2G (*Re*trieve, *Re*rank, *G*enerate).
<img src="https://github.com/IBM/kgi-slot-filling/raw/re2g/model_cards/Re2G_Arch2.png" width="100%">
## Training, Evaluation and Inference
The code for training, evaluation and inference is in our github in the [re2g branch](https://github.com/IBM/kgi-slot-filling/tree/re2g).
## Usage
The best way to use the model is by adapting the [reranker_apply.py](https://github.com/IBM/kgi-slot-filling/blob/re2g/reranker/reranker_apply.py)
## Citation
```
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
## Model Description
The model creators note in the [associated paper](https://aclanthology.org/2022.naacl-main.194.pdf):
> As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.
- **Developed by:** IBM
- **Shared by [Optional]:** IBM
- **Model type:** Query/Passage Reranker
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Parent Model:** [BERT-base trained on MSMARCO](https://huggingface.co/nboost/pt-bert-base-uncased-msmarco)
- **Resources for more information:**
- [GitHub Repo](https://github.com/IBM/kgi-slot-filling)
- [Associated Paper](https://aclanthology.org/2022.naacl-main.194.pdf)
# Uses
## Direct Use
This model can be used for the task of reranking passage results for a question.
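The abstract notes that the reranking approach permits merging retrieval results from sources with incomparable scores (e.g. BM25 and neural initial retrieval). As a rough illustration of why rank-based merging helps, here is a generic reciprocal-rank fusion sketch; this is an assumption for illustration only, not the exact merging method from the paper:

```python
# Illustrative sketch: merging ranked lists from sources whose raw scores
# are not comparable (e.g. BM25 and a neural retriever), before reranking.
# Uses simple reciprocal-rank fusion; the actual Re2G strategy is
# described in the paper and may differ.

def merge_ranked_lists(lists, k=60):
    """Combine several ranked lists of passage ids into one ranking.

    Raw scores from different retrievers are not comparable, so we
    combine ranks instead: each passage accumulates 1 / (k + rank)
    over every list in which it appears.
    """
    fused = {}
    for ranked in lists:
        for rank, passage_id in enumerate(ranked):
            fused[passage_id] = fused.get(passage_id, 0.0) + 1.0 / (k + rank)
    return sorted(fused, key=fused.get, reverse=True)

bm25 = ["p3", "p1", "p7"]
neural = ["p1", "p9", "p3"]
candidates = merge_ranked_lists([bm25, neural])
# "p1" and "p3" appear in both lists, so they rank ahead of "p7" and "p9".
```

The fused candidate list would then be passed to the reranker for final scoring.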
# Citation
**BibTeX:**
```bibtex
@inproceedings{glass-etal-2022-re2g,
title = "{R}e2{G}: Retrieve, Rerank, Generate",
author = "Glass, Michael and
Rossiello, Gaetano and
Chowdhury, Md Faisal Mahbub and
Naik, Ankita and
Cai, Pengshan and
Gliozzo, Alfio",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.194",
doi = "10.18653/v1/2022.naacl-main.194",
pages = "2701--2715",
abstract = "As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact checking and dialog, with relative gains of 9{\%} to 34{\%} over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source.",
}
```
|
RefalMachine/ruadapt_solar_10.7_part2_v5_rsg_lora
|
RefalMachine
| 2024-03-07T16:27:49Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"region:us"
] | null | 2024-03-07T12:28:52Z |
---
tags:
- generated_from_trainer
model-index:
- name: ruadapt_solar_10.7_part2_v5_rsg_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ruadapt_solar_10.7_part2_v5_rsg_lora
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1006 | 0.45 | 100 | 0.0801 |
| 0.0792 | 0.9 | 200 | 0.0694 |
| 0.0543 | 1.35 | 300 | 0.0840 |
| 0.0486 | 1.8 | 400 | 0.0599 |
| 0.016 | 2.24 | 500 | 0.0724 |
| 0.0278 | 2.69 | 600 | 0.0721 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.2
- Datasets 2.14.4
- Tokenizers 0.14.1
|
s14pe/Reinforce-2
|
s14pe
| 2024-03-07T16:25:58Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T16:25:56Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 23.50 +/- 12.20
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
MAsad789565/3DIcon_v4
|
MAsad789565
| 2024-03-07T16:22:55Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-03-07T15:58:18Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 3d icon of a chef
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
megaaziib/aziibpixelmix
|
megaaziib
| 2024-03-07T16:15:43Z | 147 | 4 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"art",
"artistic",
"anime",
"pixel art",
"en",
"license:other",
"region:us"
] |
text-to-image
| 2023-12-23T23:36:51Z |
---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- art
- artistic
- diffusers
- anime
- pixel art
inference: false
---
# AziibPixelMix
## Official Repository
Read more about this model here: https://civitai.com/models/195730/aziibpixelmix
If you find it useful, please give it 5 stars and a heart, which will also notify you of new updates.
Please consider supporting me on Ko-fi:
- https://ko-fi.com/megaaziib
|
Ashwini1412/wav2vec2-nepali-itr-9
|
Ashwini1412
| 2024-03-07T16:11:00Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-07T15:32:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lsr42/dense_sparse_qmlp_dmlm_msmarco_distil_l1_0.0_0.000001_q_encoder
|
lsr42
| 2024-03-07T16:07:27Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"MLP",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T16:07:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Shaurya25/God-test
|
Shaurya25
| 2024-03-07T16:01:32Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"mistralai/Mistral-7B-Instruct-v0.2",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T15:59:48Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- mistralai/Mistral-7B-Instruct-v0.2
- mistralai/Mistral-7B-Instruct-v0.2
---
# God-test
Hey there! 👋 Welcome to the God-test! This is a merge of multiple models brought together using the awesome [mergekit](https://github.com/cg123/mergekit).
Let's see what we've got in this merge:
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) 🚀
* [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) 🚀
## 🧩 Configuration
```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
- layer_range: [0, 8]
model: mistralai/Mistral-7B-Instruct-v0.2
- sources:
- layer_range: [4, 12]
model: mistralai/Mistral-7B-Instruct-v0.2
```
|
MarcGrumpyOlejak/VerwaltungsAnthologie_clear2_7B
|
MarcGrumpyOlejak
| 2024-03-07T16:01:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:DRXD1000/Phoenix-7B",
"base_model:merge:DRXD1000/Phoenix-7B",
"base_model:VAGOsolutions/SauerkrautLM-7b-LaserChat",
"base_model:merge:VAGOsolutions/SauerkrautLM-7b-LaserChat",
"base_model:hiig-ai-lab/simba-v01c",
"base_model:merge:hiig-ai-lab/simba-v01c",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T15:56:04Z |
---
base_model:
- VAGOsolutions/SauerkrautLM-7b-LaserChat
- mistralai/Mistral-7B-v0.1
- DRXD1000/Phoenix
- hiig-piai/simba-v01c
library_name: transformers
tags:
- mergekit
- merge
---
# VerwaltungsAnthologie_clear2_7B
This model is used as an intermediate model for future merges.
It is a merge of 4 pre-trained language models based upon Mistral-7B-v0.1 created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [VAGOsolutions/SauerkrautLM-7b-LaserChat](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-LaserChat)
* [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix)
* [hiig-piai/simba-v01c](https://huggingface.co/hiig-piai/simba-v01c)
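As a rough intuition for the DARE step of the merge (a toy illustration, not mergekit's implementation): each fine-tuned model's delta from the base is randomly dropped at a rate of 1 − density, and the surviving deltas are rescaled so their expected contribution is preserved, before TIES-style sign resolution combines them. A minimal sketch, assuming plain Python lists in place of real weight tensors:

```python
import random

def dare_sparsify(delta, density, seed=0):
    """Drop each delta parameter with probability (1 - density) and
    rescale the survivors by 1/density so the expected sum is preserved.

    Toy illustration of the DARE step; mergekit operates on real model
    tensors, not Python lists.
    """
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

delta = [0.2, -0.1, 0.05, 0.4]
sparse = dare_sparsify(delta, density=0.53)
# With this seed, the first two deltas are dropped and the last two
# are rescaled by 1/0.53.
assert len(sparse) == len(delta)
```

The `density: 0.53` values in the configuration below control exactly this retention rate per model.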
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# works but never stops
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: VAGOsolutions/SauerkrautLM-7b-LaserChat
parameters:
density: 0.53
weight: 0.225
- model: hiig-piai/simba-v01c
parameters:
density: 0.53
weight: 0.55
- model: DRXD1000/Phoenix
parameters:
density: 0.53
weight: 0.225
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
name: VerwaltungsAnthologie_clear2_7B
```
|
dragoa/distilbert-base-uncased-finetuned-emotion
|
dragoa
| 2024-03-07T15:55:36Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T15:51:08Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.926095800480484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2181
- Accuracy: 0.926
- F1: 0.9261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8483 | 1.0 | 250 | 0.3125 | 0.902 | 0.9016 |
| 0.2429 | 2.0 | 500 | 0.2181 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Croolch/q-Taxi-v3
|
Croolch
| 2024-03-07T15:55:10Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T15:54:23Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the helper function defined in the Deep RL Course notebooks.
model = load_from_hub(repo_id="Croolch/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check whether you need to pass additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
|
iamjhonathan/my_awesome_test_model
|
iamjhonathan
| 2024-03-07T15:53:49Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"text-classification",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-07T15:48:00Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_test_model
results: []
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_test_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9343
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.96 | 6 | 14.2734 |
| No log | 1.92 | 12 | 13.0301 |
| No log | 2.88 | 18 | 12.4261 |
| No log | 3.84 | 24 | 11.9343 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
itsliupeng/Mixtral-8x7B-v0.1-top3
|
itsliupeng
| 2024-03-07T15:43:32Z | 1,534 | 3 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-18T03:16:26Z |
---
license: apache-2.0
model-index:
- name: Mixtral-8x7B-v0.1-top3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.41
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 71.98
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 48.58
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.4
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 57.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=itsliupeng/Mixtral-8x7B-v0.1-top3
name: Open LLM Leaderboard
---
## Just to obtain metrics from the `HuggingFaceH4/open_llm_leaderboard`
To evaluate the impact of increasing the number of experts, modify the `num_experts_per_tok` setting in `config.json` from 2 to 3. This change specifically tests whether activating a third expert per token yields any notable improvement in the benchmark metrics.
The model weights are copied directly from https://huggingface.co/mistralai/Mixtral-8x7B-v0.1.
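The edit described above can also be made programmatically; a minimal sketch (the path below is hypothetical, so point it at your local snapshot of the model):

```python
import json

def set_experts_per_tok(config_path, n=3):
    """Raise the number of active experts per token in a Mixtral
    config.json (the default is 2) and write the file back."""
    with open(config_path) as f:
        config = json.load(f)
    config["num_experts_per_tok"] = n
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example (hypothetical local path):
# set_experts_per_tok("path/to/Mixtral-8x7B-v0.1/config.json", n=3)
```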

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_itsliupeng__Mixtral-8x7B-v0.1-top3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |69.09|
|AI2 Reasoning Challenge (25-Shot)|67.41|
|HellaSwag (10-Shot) |86.63|
|MMLU (5-Shot) |71.98|
|TruthfulQA (0-shot) |48.58|
|Winogrande (5-shot) |82.40|
|GSM8k (5-shot) |57.54|
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-07
|
alinerodrigues
| 2024-03-07T15:38:47Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-07T12:08:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-clean-07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-clean-07
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1152
- Wer: 0.0803
- Cer: 0.0225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 27.1622 | 1.0 | 67 | 4.2745 | 0.9835 | 0.9326 |
| 5.7807 | 2.0 | 134 | 3.6146 | 0.9890 | 0.9738 |
| 3.5248 | 3.0 | 201 | 2.8795 | 1.0 | 1.0 |
| 3.5248 | 4.0 | 268 | 1.4248 | 0.9997 | 0.4401 |
| 2.1437 | 5.0 | 335 | 0.1879 | 0.1313 | 0.0371 |
| 0.4218 | 6.0 | 402 | 0.1493 | 0.1158 | 0.0297 |
| 0.4218 | 7.0 | 469 | 0.1478 | 0.0934 | 0.0269 |
| 0.29 | 8.0 | 536 | 0.1387 | 0.0879 | 0.0254 |
| 0.2613 | 9.0 | 603 | 0.1240 | 0.0810 | 0.0242 |
| 0.2613 | 10.0 | 670 | 0.1322 | 0.0879 | 0.0257 |
| 0.2155 | 11.0 | 737 | 0.1315 | 0.0882 | 0.0258 |
| 0.1968 | 12.0 | 804 | 0.1238 | 0.0827 | 0.0239 |
| 0.1968 | 13.0 | 871 | 0.1231 | 0.0862 | 0.0242 |
| 0.1878 | 14.0 | 938 | 0.1160 | 0.0917 | 0.0251 |
| 0.1691 | 15.0 | 1005 | 0.1152 | 0.0803 | 0.0225 |
| 0.1691 | 16.0 | 1072 | 0.1348 | 0.0851 | 0.0243 |
| 0.1654 | 17.0 | 1139 | 0.1224 | 0.0807 | 0.0233 |
| 0.1467 | 18.0 | 1206 | 0.1228 | 0.0865 | 0.0245 |
| 0.1467 | 19.0 | 1273 | 0.1231 | 0.0807 | 0.0228 |
| 0.1356 | 20.0 | 1340 | 0.1245 | 0.0807 | 0.0237 |
| 0.1355 | 21.0 | 1407 | 0.1329 | 0.0841 | 0.0247 |
| 0.1355 | 22.0 | 1474 | 0.1294 | 0.0841 | 0.0244 |
| 0.1211 | 23.0 | 1541 | 0.1247 | 0.0782 | 0.0221 |
| 0.1331 | 24.0 | 1608 | 0.1249 | 0.0796 | 0.0228 |
| 0.1331 | 25.0 | 1675 | 0.1257 | 0.0789 | 0.0233 |
| 0.1079 | 26.0 | 1742 | 0.1260 | 0.0903 | 0.0247 |
| 0.1106 | 27.0 | 1809 | 0.1279 | 0.0765 | 0.0222 |
| 0.1106 | 28.0 | 1876 | 0.1295 | 0.0814 | 0.0233 |
| 0.0926 | 29.0 | 1943 | 0.1318 | 0.0831 | 0.0240 |
| 0.1086 | 30.0 | 2010 | 0.1324 | 0.0855 | 0.0237 |
| 0.1086 | 31.0 | 2077 | 0.1285 | 0.0820 | 0.0232 |
| 0.1039 | 32.0 | 2144 | 0.1224 | 0.0814 | 0.0220 |
| 0.0915 | 33.0 | 2211 | 0.1312 | 0.0834 | 0.0231 |
| 0.0915 | 34.0 | 2278 | 0.1290 | 0.0779 | 0.0222 |
| 0.0846 | 35.0 | 2345 | 0.1265 | 0.0789 | 0.0226 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
kajama/calculator_model_test
|
kajama
| 2024-03-07T15:35:28Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-07T13:03:04Z |
---
tags:
- generated_from_trainer
model-index:
- name: calculator_model_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# calculator_model_test
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 512
- eval_batch_size: 512
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
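The linear scheduler decays the learning rate from its initial value down to zero over the course of training. A minimal sketch, assuming zero warmup steps (as in the common `transformers` linear schedule with `num_warmup_steps=0`):

```python
def linear_lr(step, total_steps, base_lr=0.001):
    # Linear decay from base_lr at step 0 down to 0 at total_steps
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 240  # 40 epochs x 6 steps/epoch, matching the table above
print(linear_lr(0, total_steps))    # 0.001
print(linear_lr(120, total_steps))  # halfway: 0.0005
```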
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.392 | 1.0 | 6 | 2.7421 |
| 2.3678 | 2.0 | 12 | 1.9842 |
| 1.8103 | 3.0 | 18 | 1.6702 |
| 1.6444 | 4.0 | 24 | 1.5900 |
| 1.5998 | 5.0 | 30 | 1.5549 |
| 1.6759 | 6.0 | 36 | 1.5762 |
| 1.5344 | 7.0 | 42 | 1.5907 |
| 1.5276 | 8.0 | 48 | 1.5784 |
| 1.5187 | 9.0 | 54 | 1.5224 |
| 1.5007 | 10.0 | 60 | 1.4601 |
| 1.4285 | 11.0 | 66 | 1.4192 |
| 1.3919 | 12.0 | 72 | 1.3792 |
| 1.3663 | 13.0 | 78 | 1.3728 |
| 1.3035 | 14.0 | 84 | 1.2453 |
| 1.2542 | 15.0 | 90 | 1.2191 |
| 1.2343 | 16.0 | 96 | 1.1600 |
| 1.1959 | 17.0 | 102 | 1.1465 |
| 1.1617 | 18.0 | 108 | 1.0958 |
| 1.1189 | 19.0 | 114 | 1.0729 |
| 1.1026 | 20.0 | 120 | 1.1611 |
| 1.1227 | 21.0 | 126 | 1.0368 |
| 1.0386 | 22.0 | 132 | 1.0107 |
| 0.9962 | 23.0 | 138 | 0.9677 |
| 0.9762 | 24.0 | 144 | 0.9360 |
| 0.9474 | 25.0 | 150 | 0.9168 |
| 0.9317 | 26.0 | 156 | 0.9569 |
| 0.9156 | 27.0 | 162 | 0.9376 |
| 0.9061 | 28.0 | 168 | 0.9363 |
| 0.9147 | 29.0 | 174 | 0.9067 |
| 0.9141 | 30.0 | 180 | 0.8845 |
| 0.8753 | 31.0 | 186 | 0.8666 |
| 0.8572 | 32.0 | 192 | 0.8369 |
| 0.848 | 33.0 | 198 | 0.8324 |
| 0.8231 | 34.0 | 204 | 0.7965 |
| 0.8167 | 35.0 | 210 | 0.7844 |
| 0.8004 | 36.0 | 216 | 0.7741 |
| 0.7786 | 37.0 | 222 | 0.7700 |
| 0.8023 | 38.0 | 228 | 0.7571 |
| 0.7799 | 39.0 | 234 | 0.7593 |
| 0.7947 | 40.0 | 240 | 0.7518 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Grubbe2/q-FrozenLake-v1-4x4-noSlippery
|
Grubbe2
| 2024-03-07T15:34:22Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T15:34:19Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is a helper (e.g. from the Hugging Face Deep RL course) that
# downloads the pickle via huggingface_hub and unpickles it.
model = load_from_hub(repo_id="Grubbe2/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
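The pickled model holds the trained Q-table; acting greedily with respect to it is what achieves the reported mean reward. A minimal sketch with an illustrative (untrained) table — the exact key under which the table is stored in the pickle is an assumption:

```python
import numpy as np

# FrozenLake-v1 4x4 has 16 states and 4 actions (left, down, right, up).
qtable = np.zeros((16, 4))
qtable[0] = [0.1, 0.5, 0.2, 0.0]  # illustrative values, not the trained ones

def greedy_action(qtable, state):
    # Exploit only: pick the action with the highest Q-value for this state
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # 1 -> "down"
```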
|
BlitherBoom/q-FrozenLake-v1-4x4-noSlippery-0307
|
BlitherBoom
| 2024-03-07T15:33:46Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-07T15:33:42Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-0307
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# load_from_hub is a helper (e.g. from the Hugging Face Deep RL course) that
# downloads the pickle via huggingface_hub and unpickles it.
model = load_from_hub(repo_id="BlitherBoom/q-FrozenLake-v1-4x4-noSlippery-0307", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
JustData/CatPPT-7B-GGUF
|
JustData
| 2024-03-07T15:30:31Z | 2 | 0 | null |
[
"gguf",
"base_model:rishiraj/CatPPT-base",
"base_model:quantized:rishiraj/CatPPT-base",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T15:22:44Z |
---
inference: false
license: apache-2.0
model_creator: rishiraj
model_name: CatPPT
base_model: rishiraj/CatPPT-base
---
# CatPPT 7B - GGUF
- Model creator: [Rishiraj Acharya](https://huggingface.co/rishiraj)
- Original model: [CatPPT](https://huggingface.co/rishiraj/CatPPT)
Quantized GGUF version of CatPPT (instruct), produced with llama.cpp's [convert.py](https://github.com/ggerganov/llama.cpp/blob/master/convert.py).
|
lachkarsalim/Helsinki-translation-English_Moroccan-Arabic
|
lachkarsalim
| 2024-03-07T15:28:54Z | 108 | 7 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"en",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-06T22:19:51Z |
---
language:
- en
- ar
---
## Model Description
This model is a fine-tuned version of the Transformer model Helsinki-NLP/opus-mt-en-ar, adapted to translate text from English to Darija (Moroccan Arabic). The fine-tuning was conducted on a substantial dataset.
## Fine-tuning Details
- **Source model:** Helsinki-NLP/opus-mt-en-ar
- **Objective:** adapt the pre-existing English-to-Arabic translation model to translate from English to Moroccan Arabic (Darija).
- **Dataset:** the model was fine-tuned on the Darija Open Dataset (DODa), an open-source project dedicated to the Moroccan dialect. DODa contains approximately 150,000 entries, making it one of the largest open-source collaborative Darija <=> English resources aimed at Natural Language Processing (NLP) applications.
- **Training examples:** more than 15,000 English-Darija translation pairs were used for fine-tuning.
- **Training time:** the fine-tuning process took approximately 8 hours to complete.
## Acknowledgments
I would like to acknowledge the contributors to the Darija Open Dataset (DODa) for providing an extensive and valuable resource for training this model. Their effort in building the largest open-source Darija dataset has significantly facilitated research and development in NLP applications tailored to Moroccan Arabic.
|
Ashwini1412/wav2vec2-nepali-itr-8
|
Ashwini1412
| 2024-03-07T15:28:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-07T13:51:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OwOOwO/eacc_tp2
|
OwOOwO
| 2024-03-07T15:18:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T15:15:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bibrani/bibrani-fb-opt-125m-ultrachat-10k-chatml
|
bibrani
| 2024-03-07T15:10:24Z | 4 | 0 |
peft
|
[
"peft",
"pytorch",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"region:us"
] | null | 2024-03-07T14:46:50Z |
---
library_name: peft
base_model: facebook/opt-125m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0
|
Alpaca69B/gemma-2b-absa-2epoches
|
Alpaca69B
| 2024-03-07T15:10:07Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-24T09:18:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Fine-tuned for 2 epochs with LoRA rank r = 32 and alpha = 64.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
farid1088/GQA_RoBERTa_German_legal_SQuAD_part_augmented_1000
|
farid1088
| 2024-03-07T15:06:42Z | 23 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-03-06T00:57:17Z |
---
tags:
- generated_from_trainer
model-index:
- name: GQA_RoBERTa_German_legal_SQuAD_part_augmented_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GQA_RoBERTa_German_legal_SQuAD_part_augmented_1000
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.7757 |
| No log | 2.0 | 8 | 3.1210 |
| No log | 3.0 | 12 | 2.7424 |
| No log | 4.0 | 16 | 2.3990 |
| No log | 5.0 | 20 | 2.0583 |
| No log | 6.0 | 24 | 1.9699 |
| No log | 7.0 | 28 | 1.6942 |
| No log | 8.0 | 32 | 1.5022 |
| No log | 9.0 | 36 | 1.4585 |
| No log | 10.0 | 40 | 1.1937 |
| No log | 11.0 | 44 | 1.1496 |
| No log | 12.0 | 48 | 0.9856 |
| No log | 13.0 | 52 | 0.9389 |
| No log | 14.0 | 56 | 0.9621 |
| No log | 15.0 | 60 | 0.8580 |
| No log | 16.0 | 64 | 0.8093 |
| No log | 17.0 | 68 | 0.7783 |
| No log | 18.0 | 72 | 0.7656 |
| No log | 19.0 | 76 | 0.7793 |
| No log | 20.0 | 80 | 0.7327 |
| No log | 21.0 | 84 | 0.7109 |
| No log | 22.0 | 88 | 0.7120 |
| No log | 23.0 | 92 | 0.7099 |
| No log | 24.0 | 96 | 0.7191 |
| No log | 25.0 | 100 | 0.7350 |
| No log | 26.0 | 104 | 0.7634 |
| No log | 27.0 | 108 | 0.7498 |
| No log | 28.0 | 112 | 0.7353 |
| No log | 29.0 | 116 | 0.7319 |
| No log | 30.0 | 120 | 0.7603 |
| No log | 31.0 | 124 | 0.7701 |
| No log | 32.0 | 128 | 0.7818 |
| No log | 33.0 | 132 | 0.7904 |
| No log | 34.0 | 136 | 0.7580 |
| No log | 35.0 | 140 | 0.7640 |
| No log | 36.0 | 144 | 0.7558 |
| No log | 37.0 | 148 | 0.7470 |
| No log | 38.0 | 152 | 0.7730 |
| No log | 39.0 | 156 | 0.7450 |
| No log | 40.0 | 160 | 0.7516 |
| No log | 41.0 | 164 | 0.7475 |
| No log | 42.0 | 168 | 0.7306 |
| No log | 43.0 | 172 | 0.7488 |
| No log | 44.0 | 176 | 0.7604 |
| No log | 45.0 | 180 | 0.8035 |
| No log | 46.0 | 184 | 0.7837 |
| No log | 47.0 | 188 | 0.7307 |
| No log | 48.0 | 192 | 0.6987 |
| No log | 49.0 | 196 | 0.7281 |
| No log | 50.0 | 200 | 0.7453 |
| No log | 51.0 | 204 | 0.7811 |
| No log | 52.0 | 208 | 0.7951 |
| No log | 53.0 | 212 | 0.7833 |
| No log | 54.0 | 216 | 0.7961 |
| No log | 55.0 | 220 | 0.8255 |
| No log | 56.0 | 224 | 0.8038 |
| No log | 57.0 | 228 | 0.8384 |
| No log | 58.0 | 232 | 0.8412 |
| No log | 59.0 | 236 | 0.8206 |
| No log | 60.0 | 240 | 0.8224 |
| No log | 61.0 | 244 | 0.8638 |
| No log | 62.0 | 248 | 0.9014 |
| No log | 63.0 | 252 | 0.9255 |
| No log | 64.0 | 256 | 0.9019 |
| No log | 65.0 | 260 | 0.8741 |
| No log | 66.0 | 264 | 0.8442 |
| No log | 67.0 | 268 | 0.8526 |
| No log | 68.0 | 272 | 0.8702 |
| No log | 69.0 | 276 | 0.9321 |
| No log | 70.0 | 280 | 0.9450 |
| No log | 71.0 | 284 | 0.8868 |
| No log | 72.0 | 288 | 0.8622 |
| No log | 73.0 | 292 | 0.8586 |
| No log | 74.0 | 296 | 0.8935 |
| No log | 75.0 | 300 | 0.9010 |
| No log | 76.0 | 304 | 0.8703 |
| No log | 77.0 | 308 | 0.8726 |
| No log | 78.0 | 312 | 0.9113 |
| No log | 79.0 | 316 | 0.9175 |
| No log | 80.0 | 320 | 0.9173 |
| No log | 81.0 | 324 | 0.9550 |
| No log | 82.0 | 328 | 0.9649 |
| No log | 83.0 | 332 | 0.9917 |
| No log | 84.0 | 336 | 0.9783 |
| No log | 85.0 | 340 | 0.9558 |
| No log | 86.0 | 344 | 0.9425 |
| No log | 87.0 | 348 | 0.9323 |
| No log | 88.0 | 352 | 0.9471 |
| No log | 89.0 | 356 | 0.9749 |
| No log | 90.0 | 360 | 0.9638 |
| No log | 91.0 | 364 | 0.9881 |
| No log | 92.0 | 368 | 0.9697 |
| No log | 93.0 | 372 | 0.9189 |
| No log | 94.0 | 376 | 0.9036 |
| No log | 95.0 | 380 | 0.8745 |
| No log | 96.0 | 384 | 0.8811 |
| No log | 97.0 | 388 | 0.8967 |
| No log | 98.0 | 392 | 0.9032 |
| No log | 99.0 | 396 | 0.9201 |
| No log | 100.0 | 400 | 0.9524 |
| No log | 101.0 | 404 | 0.9983 |
| No log | 102.0 | 408 | 0.9742 |
| No log | 103.0 | 412 | 0.9834 |
| No log | 104.0 | 416 | 0.9480 |
| No log | 105.0 | 420 | 0.9367 |
| No log | 106.0 | 424 | 0.9340 |
| No log | 107.0 | 428 | 0.9454 |
| No log | 108.0 | 432 | 0.9553 |
| No log | 109.0 | 436 | 0.9694 |
| No log | 110.0 | 440 | 0.9696 |
| No log | 111.0 | 444 | 0.9280 |
| No log | 112.0 | 448 | 0.9166 |
| No log | 113.0 | 452 | 0.9406 |
| No log | 114.0 | 456 | 0.9372 |
| No log | 115.0 | 460 | 0.9147 |
| No log | 116.0 | 464 | 0.9267 |
| No log | 117.0 | 468 | 0.9665 |
| No log | 118.0 | 472 | 1.0231 |
| No log | 119.0 | 476 | 1.0291 |
| No log | 120.0 | 480 | 0.9973 |
| No log | 121.0 | 484 | 0.9516 |
| No log | 122.0 | 488 | 0.9134 |
| No log | 123.0 | 492 | 0.8852 |
| No log | 124.0 | 496 | 0.8535 |
| 0.9595 | 125.0 | 500 | 0.9003 |
| 0.9595 | 126.0 | 504 | 0.9523 |
| 0.9595 | 127.0 | 508 | 0.9925 |
| 0.9595 | 128.0 | 512 | 0.9736 |
| 0.9595 | 129.0 | 516 | 0.9584 |
| 0.9595 | 130.0 | 520 | 0.9625 |
| 0.9595 | 131.0 | 524 | 0.9533 |
| 0.9595 | 132.0 | 528 | 0.9774 |
| 0.9595 | 133.0 | 532 | 0.9898 |
| 0.9595 | 134.0 | 536 | 0.9657 |
| 0.9595 | 135.0 | 540 | 0.9627 |
| 0.9595 | 136.0 | 544 | 1.0049 |
| 0.9595 | 137.0 | 548 | 1.0241 |
| 0.9595 | 138.0 | 552 | 1.0184 |
| 0.9595 | 139.0 | 556 | 1.0387 |
| 0.9595 | 140.0 | 560 | 1.0528 |
| 0.9595 | 141.0 | 564 | 1.0510 |
| 0.9595 | 142.0 | 568 | 1.0153 |
| 0.9595 | 143.0 | 572 | 0.9628 |
| 0.9595 | 144.0 | 576 | 0.9999 |
| 0.9595 | 145.0 | 580 | 1.0139 |
| 0.9595 | 146.0 | 584 | 1.0149 |
| 0.9595 | 147.0 | 588 | 1.0016 |
| 0.9595 | 148.0 | 592 | 0.9516 |
| 0.9595 | 149.0 | 596 | 0.9290 |
| 0.9595 | 150.0 | 600 | 0.9084 |
| 0.9595 | 151.0 | 604 | 0.8736 |
| 0.9595 | 152.0 | 608 | 0.8832 |
| 0.9595 | 153.0 | 612 | 0.9093 |
| 0.9595 | 154.0 | 616 | 0.9489 |
| 0.9595 | 155.0 | 620 | 0.9548 |
| 0.9595 | 156.0 | 624 | 0.8944 |
| 0.9595 | 157.0 | 628 | 0.8681 |
| 0.9595 | 158.0 | 632 | 0.8733 |
| 0.9595 | 159.0 | 636 | 0.8852 |
| 0.9595 | 160.0 | 640 | 0.9133 |
| 0.9595 | 161.0 | 644 | 0.8900 |
| 0.9595 | 162.0 | 648 | 0.8863 |
| 0.9595 | 163.0 | 652 | 0.8928 |
| 0.9595 | 164.0 | 656 | 0.8959 |
| 0.9595 | 165.0 | 660 | 0.9163 |
| 0.9595 | 166.0 | 664 | 0.9739 |
| 0.9595 | 167.0 | 668 | 1.0204 |
| 0.9595 | 168.0 | 672 | 1.0059 |
| 0.9595 | 169.0 | 676 | 0.9578 |
| 0.9595 | 170.0 | 680 | 0.9313 |
| 0.9595 | 171.0 | 684 | 0.9084 |
| 0.9595 | 172.0 | 688 | 0.9836 |
| 0.9595 | 173.0 | 692 | 1.0601 |
| 0.9595 | 174.0 | 696 | 1.0884 |
| 0.9595 | 175.0 | 700 | 1.0779 |
| 0.9595 | 176.0 | 704 | 1.0599 |
| 0.9595 | 177.0 | 708 | 1.0422 |
| 0.9595 | 178.0 | 712 | 1.0271 |
| 0.9595 | 179.0 | 716 | 1.0100 |
| 0.9595 | 180.0 | 720 | 0.9945 |
| 0.9595 | 181.0 | 724 | 1.0018 |
| 0.9595 | 182.0 | 728 | 1.0234 |
| 0.9595 | 183.0 | 732 | 1.0380 |
| 0.9595 | 184.0 | 736 | 1.0525 |
| 0.9595 | 185.0 | 740 | 1.0420 |
| 0.9595 | 186.0 | 744 | 1.0325 |
| 0.9595 | 187.0 | 748 | 1.0125 |
| 0.9595 | 188.0 | 752 | 0.9891 |
| 0.9595 | 189.0 | 756 | 0.9515 |
| 0.9595 | 190.0 | 760 | 0.9495 |
| 0.9595 | 191.0 | 764 | 0.9642 |
| 0.9595 | 192.0 | 768 | 0.9876 |
| 0.9595 | 193.0 | 772 | 0.9985 |
| 0.9595 | 194.0 | 776 | 1.0227 |
| 0.9595 | 195.0 | 780 | 1.0730 |
| 0.9595 | 196.0 | 784 | 1.0871 |
| 0.9595 | 197.0 | 788 | 1.0918 |
| 0.9595 | 198.0 | 792 | 1.1092 |
| 0.9595 | 199.0 | 796 | 1.0989 |
| 0.9595 | 200.0 | 800 | 1.0992 |
| 0.9595 | 201.0 | 804 | 1.1034 |
| 0.9595 | 202.0 | 808 | 1.0881 |
| 0.9595 | 203.0 | 812 | 1.0707 |
| 0.9595 | 204.0 | 816 | 1.0777 |
| 0.9595 | 205.0 | 820 | 1.0758 |
| 0.9595 | 206.0 | 824 | 1.0684 |
| 0.9595 | 207.0 | 828 | 1.0629 |
| 0.9595 | 208.0 | 832 | 1.0659 |
| 0.9595 | 209.0 | 836 | 1.0585 |
| 0.9595 | 210.0 | 840 | 1.0132 |
| 0.9595 | 211.0 | 844 | 0.9791 |
| 0.9595 | 212.0 | 848 | 0.9761 |
| 0.9595 | 213.0 | 852 | 1.0348 |
| 0.9595 | 214.0 | 856 | 1.0910 |
| 0.9595 | 215.0 | 860 | 1.1354 |
| 0.9595 | 216.0 | 864 | 1.1348 |
| 0.9595 | 217.0 | 868 | 1.0884 |
| 0.9595 | 218.0 | 872 | 1.0430 |
| 0.9595 | 219.0 | 876 | 1.0202 |
| 0.9595 | 220.0 | 880 | 1.0097 |
| 0.9595 | 221.0 | 884 | 1.0151 |
| 0.9595 | 222.0 | 888 | 1.0096 |
| 0.9595 | 223.0 | 892 | 1.0302 |
| 0.9595 | 224.0 | 896 | 1.0635 |
| 0.9595 | 225.0 | 900 | 1.0611 |
| 0.9595 | 226.0 | 904 | 1.0548 |
| 0.9595 | 227.0 | 908 | 1.1173 |
| 0.9595 | 228.0 | 912 | 1.1561 |
| 0.9595 | 229.0 | 916 | 1.1550 |
| 0.9595 | 230.0 | 920 | 1.0254 |
| 0.9595 | 231.0 | 924 | 0.9364 |
| 0.9595 | 232.0 | 928 | 0.9316 |
| 0.9595 | 233.0 | 932 | 0.9717 |
| 0.9595 | 234.0 | 936 | 1.0406 |
| 0.9595 | 235.0 | 940 | 1.0643 |
| 0.9595 | 236.0 | 944 | 1.1092 |
| 0.9595 | 237.0 | 948 | 1.1197 |
| 0.9595 | 238.0 | 952 | 1.1270 |
| 0.9595 | 239.0 | 956 | 1.1300 |
| 0.9595 | 240.0 | 960 | 1.0921 |
| 0.9595 | 241.0 | 964 | 1.0446 |
| 0.9595 | 242.0 | 968 | 1.0234 |
| 0.9595 | 243.0 | 972 | 1.0067 |
| 0.9595 | 244.0 | 976 | 1.0324 |
| 0.9595 | 245.0 | 980 | 1.0434 |
| 0.9595 | 246.0 | 984 | 1.0502 |
| 0.9595 | 247.0 | 988 | 1.0618 |
| 0.9595 | 248.0 | 992 | 1.1352 |
| 0.9595 | 249.0 | 996 | 1.1672 |
| 0.4061 | 250.0 | 1000 | 1.1700 |
| 0.4061 | 251.0 | 1004 | 1.1416 |
| 0.4061 | 252.0 | 1008 | 1.1198 |
| 0.4061 | 253.0 | 1012 | 1.1226 |
| 0.4061 | 254.0 | 1016 | 1.1220 |
| 0.4061 | 255.0 | 1020 | 1.1317 |
| 0.4061 | 256.0 | 1024 | 1.1390 |
| 0.4061 | 257.0 | 1028 | 1.1069 |
| 0.4061 | 258.0 | 1032 | 1.0700 |
| 0.4061 | 259.0 | 1036 | 1.0657 |
| 0.4061 | 260.0 | 1040 | 1.0839 |
| 0.4061 | 261.0 | 1044 | 1.1030 |
| 0.4061 | 262.0 | 1048 | 1.1005 |
| 0.4061 | 263.0 | 1052 | 1.0882 |
| 0.4061 | 264.0 | 1056 | 1.0740 |
| 0.4061 | 265.0 | 1060 | 1.0710 |
| 0.4061 | 266.0 | 1064 | 1.0775 |
| 0.4061 | 267.0 | 1068 | 1.0908 |
| 0.4061 | 268.0 | 1072 | 1.1077 |
| 0.4061 | 269.0 | 1076 | 1.1204 |
| 0.4061 | 270.0 | 1080 | 1.1259 |
| 0.4061 | 271.0 | 1084 | 1.1208 |
| 0.4061 | 272.0 | 1088 | 1.1004 |
| 0.4061 | 273.0 | 1092 | 1.0761 |
| 0.4061 | 274.0 | 1096 | 1.0683 |
| 0.4061 | 275.0 | 1100 | 1.0663 |
| 0.4061 | 276.0 | 1104 | 1.0627 |
| 0.4061 | 277.0 | 1108 | 1.1069 |
| 0.4061 | 278.0 | 1112 | 1.1032 |
| 0.4061 | 279.0 | 1116 | 1.0401 |
| 0.4061 | 280.0 | 1120 | 1.0408 |
| 0.4061 | 281.0 | 1124 | 1.1004 |
| 0.4061 | 282.0 | 1128 | 1.1623 |
| 0.4061 | 283.0 | 1132 | 1.1512 |
| 0.4061 | 284.0 | 1136 | 1.1242 |
| 0.4061 | 285.0 | 1140 | 1.0919 |
| 0.4061 | 286.0 | 1144 | 1.0818 |
| 0.4061 | 287.0 | 1148 | 1.0703 |
| 0.4061 | 288.0 | 1152 | 1.0501 |
| 0.4061 | 289.0 | 1156 | 1.0347 |
| 0.4061 | 290.0 | 1160 | 1.0299 |
| 0.4061 | 291.0 | 1164 | 1.0641 |
| 0.4061 | 292.0 | 1168 | 1.0679 |
| 0.4061 | 293.0 | 1172 | 1.0680 |
| 0.4061 | 294.0 | 1176 | 1.1041 |
| 0.4061 | 295.0 | 1180 | 1.1802 |
| 0.4061 | 296.0 | 1184 | 1.1971 |
| 0.4061 | 297.0 | 1188 | 1.1793 |
| 0.4061 | 298.0 | 1192 | 1.1459 |
| 0.4061 | 299.0 | 1196 | 1.1035 |
| 0.4061 | 300.0 | 1200 | 1.0577 |
| 0.4061 | 301.0 | 1204 | 1.0544 |
| 0.4061 | 302.0 | 1208 | 1.0737 |
| 0.4061 | 303.0 | 1212 | 1.0819 |
| 0.4061 | 304.0 | 1216 | 1.0899 |
| 0.4061 | 305.0 | 1220 | 1.0885 |
| 0.4061 | 306.0 | 1224 | 1.0755 |
| 0.4061 | 307.0 | 1228 | 1.0139 |
| 0.4061 | 308.0 | 1232 | 0.9849 |
| 0.4061 | 309.0 | 1236 | 0.9781 |
| 0.4061 | 310.0 | 1240 | 0.9953 |
| 0.4061 | 311.0 | 1244 | 1.0138 |
| 0.4061 | 312.0 | 1248 | 1.0119 |
| 0.4061 | 313.0 | 1252 | 1.0704 |
| 0.4061 | 314.0 | 1256 | 1.1161 |
| 0.4061 | 315.0 | 1260 | 1.1500 |
| 0.4061 | 316.0 | 1264 | 1.1862 |
| 0.4061 | 317.0 | 1268 | 1.1833 |
| 0.4061 | 318.0 | 1272 | 1.1706 |
| 0.4061 | 319.0 | 1276 | 1.1517 |
| 0.4061 | 320.0 | 1280 | 1.1309 |
| 0.4061 | 321.0 | 1284 | 1.0936 |
| 0.4061 | 322.0 | 1288 | 1.0957 |
| 0.4061 | 323.0 | 1292 | 1.1080 |
| 0.4061 | 324.0 | 1296 | 1.1087 |
| 0.4061 | 325.0 | 1300 | 1.1314 |
| 0.4061 | 326.0 | 1304 | 1.1757 |
| 0.4061 | 327.0 | 1308 | 1.1896 |
| 0.4061 | 328.0 | 1312 | 1.1742 |
| 0.4061 | 329.0 | 1316 | 1.1661 |
| 0.4061 | 330.0 | 1320 | 1.1675 |
| 0.4061 | 331.0 | 1324 | 1.1691 |
| 0.4061 | 332.0 | 1328 | 1.1715 |
| 0.4061 | 333.0 | 1332 | 1.1513 |
| 0.4061 | 334.0 | 1336 | 1.1347 |
| 0.4061 | 335.0 | 1340 | 1.1386 |
| 0.4061 | 336.0 | 1344 | 1.1587 |
| 0.4061 | 337.0 | 1348 | 1.1739 |
| 0.4061 | 338.0 | 1352 | 1.1790 |
| 0.4061 | 339.0 | 1356 | 1.1615 |
| 0.4061 | 340.0 | 1360 | 1.1484 |
| 0.4061 | 341.0 | 1364 | 1.1376 |
| 0.4061 | 342.0 | 1368 | 1.1258 |
| 0.4061 | 343.0 | 1372 | 1.1142 |
| 0.4061 | 344.0 | 1376 | 1.1062 |
| 0.4061 | 345.0 | 1380 | 1.0986 |
| 0.4061 | 346.0 | 1384 | 1.0905 |
| 0.4061 | 347.0 | 1388 | 1.0776 |
| 0.4061 | 348.0 | 1392 | 1.0687 |
| 0.4061 | 349.0 | 1396 | 1.0865 |
| 0.4061 | 350.0 | 1400 | 1.0822 |
| 0.4061 | 351.0 | 1404 | 1.0831 |
| 0.4061 | 352.0 | 1408 | 1.0914 |
| 0.4061 | 353.0 | 1412 | 1.1018 |
| 0.4061 | 354.0 | 1416 | 1.1078 |
| 0.4061 | 355.0 | 1420 | 1.1190 |
| 0.4061 | 356.0 | 1424 | 1.1374 |
| 0.4061 | 357.0 | 1428 | 1.1534 |
| 0.4061 | 358.0 | 1432 | 1.2011 |
| 0.4061 | 359.0 | 1436 | 1.2166 |
| 0.4061 | 360.0 | 1440 | 1.2168 |
| 0.4061 | 361.0 | 1444 | 1.2144 |
| 0.4061 | 362.0 | 1448 | 1.1989 |
| 0.4061 | 363.0 | 1452 | 1.1832 |
| 0.4061 | 364.0 | 1456 | 1.1531 |
| 0.4061 | 365.0 | 1460 | 1.1422 |
| 0.4061 | 366.0 | 1464 | 1.1279 |
| 0.4061 | 367.0 | 1468 | 1.1210 |
| 0.4061 | 368.0 | 1472 | 1.1114 |
| 0.4061 | 369.0 | 1476 | 1.1034 |
| 0.4061 | 370.0 | 1480 | 1.0998 |
| 0.4061 | 371.0 | 1484 | 1.1009 |
| 0.4061 | 372.0 | 1488 | 1.1048 |
| 0.4061 | 373.0 | 1492 | 1.1002 |
| 0.4061 | 374.0 | 1496 | 1.0920 |
| 0.4027 | 375.0 | 1500 | 1.0851 |
| 0.4027 | 376.0 | 1504 | 1.0787 |
| 0.4027 | 377.0 | 1508 | 1.0733 |
| 0.4027 | 378.0 | 1512 | 1.0695 |
| 0.4027 | 379.0 | 1516 | 1.0686 |
| 0.4027 | 380.0 | 1520 | 1.0687 |
| 0.4027 | 381.0 | 1524 | 1.0757 |
| 0.4027 | 382.0 | 1528 | 1.1245 |
| 0.4027 | 383.0 | 1532 | 1.1659 |
| 0.4027 | 384.0 | 1536 | 1.1729 |
| 0.4027 | 385.0 | 1540 | 1.1401 |
| 0.4027 | 386.0 | 1544 | 1.1316 |
| 0.4027 | 387.0 | 1548 | 1.1445 |
| 0.4027 | 388.0 | 1552 | 1.1504 |
| 0.4027 | 389.0 | 1556 | 1.1461 |
| 0.4027 | 390.0 | 1560 | 1.1450 |
| 0.4027 | 391.0 | 1564 | 1.1428 |
| 0.4027 | 392.0 | 1568 | 1.1392 |
| 0.4027 | 393.0 | 1572 | 1.1304 |
| 0.4027 | 394.0 | 1576 | 1.1038 |
| 0.4027 | 395.0 | 1580 | 1.0931 |
| 0.4027 | 396.0 | 1584 | 1.0837 |
| 0.4027 | 397.0 | 1588 | 1.0824 |
| 0.4027 | 398.0 | 1592 | 1.0808 |
| 0.4027 | 399.0 | 1596 | 1.0819 |
| 0.4027 | 400.0 | 1600 | 1.0794 |
| 0.4027 | 401.0 | 1604 | 1.0887 |
| 0.4027 | 402.0 | 1608 | 1.0771 |
| 0.4027 | 403.0 | 1612 | 1.1094 |
| 0.4027 | 404.0 | 1616 | 1.1436 |
| 0.4027 | 405.0 | 1620 | 1.1654 |
| 0.4027 | 406.0 | 1624 | 1.1661 |
| 0.4027 | 407.0 | 1628 | 1.1561 |
| 0.4027 | 408.0 | 1632 | 1.1425 |
| 0.4027 | 409.0 | 1636 | 1.1329 |
| 0.4027 | 410.0 | 1640 | 1.1031 |
| 0.4027 | 411.0 | 1644 | 1.0969 |
| 0.4027 | 412.0 | 1648 | 1.1374 |
| 0.4027 | 413.0 | 1652 | 1.2151 |
| 0.4027 | 414.0 | 1656 | 1.2531 |
| 0.4027 | 415.0 | 1660 | 1.2576 |
| 0.4027 | 416.0 | 1664 | 1.2520 |
| 0.4027 | 417.0 | 1668 | 1.2261 |
| 0.4027 | 418.0 | 1672 | 1.1952 |
| 0.4027 | 419.0 | 1676 | 1.1627 |
| 0.4027 | 420.0 | 1680 | 1.1412 |
| 0.4027 | 421.0 | 1684 | 1.1316 |
| 0.4027 | 422.0 | 1688 | 1.1335 |
| 0.4027 | 423.0 | 1692 | 1.1366 |
| 0.4027 | 424.0 | 1696 | 1.1405 |
| 0.4027 | 425.0 | 1700 | 1.1503 |
| 0.4027 | 426.0 | 1704 | 1.1579 |
| 0.4027 | 427.0 | 1708 | 1.1629 |
| 0.4027 | 428.0 | 1712 | 1.1647 |
| 0.4027 | 429.0 | 1716 | 1.1752 |
| 0.4027 | 430.0 | 1720 | 1.2149 |
| 0.4027 | 431.0 | 1724 | 1.2361 |
| 0.4027 | 432.0 | 1728 | 1.2406 |
| 0.4027 | 433.0 | 1732 | 1.2271 |
| 0.4027 | 434.0 | 1736 | 1.2130 |
| 0.4027 | 435.0 | 1740 | 1.2011 |
| 0.4027 | 436.0 | 1744 | 1.1930 |
| 0.4027 | 437.0 | 1748 | 1.1895 |
| 0.4027 | 438.0 | 1752 | 1.1903 |
| 0.4027 | 439.0 | 1756 | 1.1907 |
| 0.4027 | 440.0 | 1760 | 1.1871 |
| 0.4027 | 441.0 | 1764 | 1.1850 |
| 0.4027 | 442.0 | 1768 | 1.1835 |
| 0.4027 | 443.0 | 1772 | 1.1841 |
| 0.4027 | 444.0 | 1776 | 1.1790 |
| 0.4027 | 445.0 | 1780 | 1.1860 |
| 0.4027 | 446.0 | 1784 | 1.1998 |
| 0.4027 | 447.0 | 1788 | 1.2106 |
| 0.4027 | 448.0 | 1792 | 1.2091 |
| 0.4027 | 449.0 | 1796 | 1.2059 |
| 0.4027 | 450.0 | 1800 | 1.2032 |
| 0.4027 | 451.0 | 1804 | 1.2225 |
| 0.4027 | 452.0 | 1808 | 1.2336 |
| 0.4027 | 453.0 | 1812 | 1.2409 |
| 0.4027 | 454.0 | 1816 | 1.2450 |
| 0.4027 | 455.0 | 1820 | 1.2479 |
| 0.4027 | 456.0 | 1824 | 1.2373 |
| 0.4027 | 457.0 | 1828 | 1.2258 |
| 0.4027 | 458.0 | 1832 | 1.2178 |
| 0.4027 | 459.0 | 1836 | 1.2142 |
| 0.4027 | 460.0 | 1840 | 1.2237 |
| 0.4027 | 461.0 | 1844 | 1.2365 |
| 0.4027 | 462.0 | 1848 | 1.2448 |
| 0.4027 | 463.0 | 1852 | 1.2462 |
| 0.4027 | 464.0 | 1856 | 1.2458 |
| 0.4027 | 465.0 | 1860 | 1.2426 |
| 0.4027 | 466.0 | 1864 | 1.2366 |
| 0.4027 | 467.0 | 1868 | 1.2280 |
| 0.4027 | 468.0 | 1872 | 1.2097 |
| 0.4027 | 469.0 | 1876 | 1.1996 |
| 0.4027 | 470.0 | 1880 | 1.1970 |
| 0.4027 | 471.0 | 1884 | 1.1946 |
| 0.4027 | 472.0 | 1888 | 1.1921 |
| 0.4027 | 473.0 | 1892 | 1.1885 |
| 0.4027 | 474.0 | 1896 | 1.1959 |
| 0.4027 | 475.0 | 1900 | 1.2028 |
| 0.4027 | 476.0 | 1904 | 1.2091 |
| 0.4027 | 477.0 | 1908 | 1.2131 |
| 0.4027 | 478.0 | 1912 | 1.2149 |
| 0.4027 | 479.0 | 1916 | 1.2142 |
| 0.4027 | 480.0 | 1920 | 1.2106 |
| 0.4027 | 481.0 | 1924 | 1.2185 |
| 0.4027 | 482.0 | 1928 | 1.2249 |
| 0.4027 | 483.0 | 1932 | 1.2221 |
| 0.4027 | 484.0 | 1936 | 1.2240 |
| 0.4027 | 485.0 | 1940 | 1.2291 |
| 0.4027 | 486.0 | 1944 | 1.2215 |
| 0.4027 | 487.0 | 1948 | 1.2306 |
| 0.4027 | 488.0 | 1952 | 1.2364 |
| 0.4027 | 489.0 | 1956 | 1.2394 |
| 0.4027 | 490.0 | 1960 | 1.2425 |
| 0.4027 | 491.0 | 1964 | 1.2441 |
| 0.4027 | 492.0 | 1968 | 1.2484 |
| 0.4027 | 493.0 | 1972 | 1.2533 |
| 0.4027 | 494.0 | 1976 | 1.2587 |
| 0.4027 | 495.0 | 1980 | 1.2861 |
| 0.4027 | 496.0 | 1984 | 1.3230 |
| 0.4027 | 497.0 | 1988 | 1.3310 |
| 0.4027 | 498.0 | 1992 | 1.3040 |
| 0.4027 | 499.0 | 1996 | 1.2828 |
| 0.4015 | 500.0 | 2000 | 1.2658 |
| 0.4015 | 501.0 | 2004 | 1.2563 |
| 0.4015 | 502.0 | 2008 | 1.2468 |
| 0.4015 | 503.0 | 2012 | 1.2381 |
| 0.4015 | 504.0 | 2016 | 1.2305 |
| 0.4015 | 505.0 | 2020 | 1.2271 |
| 0.4015 | 506.0 | 2024 | 1.2447 |
| 0.4015 | 507.0 | 2028 | 1.2642 |
| 0.4015 | 508.0 | 2032 | 1.2743 |
| 0.4015 | 509.0 | 2036 | 1.2797 |
| 0.4015 | 510.0 | 2040 | 1.2839 |
| 0.4015 | 511.0 | 2044 | 1.2645 |
| 0.4015 | 512.0 | 2048 | 1.2411 |
| 0.4015 | 513.0 | 2052 | 1.2261 |
| 0.4015 | 514.0 | 2056 | 1.2141 |
| 0.4015 | 515.0 | 2060 | 1.2026 |
| 0.4015 | 516.0 | 2064 | 1.1991 |
| 0.4015 | 517.0 | 2068 | 1.2004 |
| 0.4015 | 518.0 | 2072 | 1.1927 |
| 0.4015 | 519.0 | 2076 | 1.2065 |
| 0.4015 | 520.0 | 2080 | 1.1876 |
| 0.4015 | 521.0 | 2084 | 1.1670 |
| 0.4015 | 522.0 | 2088 | 1.2298 |
| 0.4015 | 523.0 | 2092 | 1.2412 |
| 0.4015 | 524.0 | 2096 | 1.2469 |
| 0.4015 | 525.0 | 2100 | 1.2639 |
| 0.4015 | 526.0 | 2104 | 1.2845 |
| 0.4015 | 527.0 | 2108 | 1.2928 |
| 0.4015 | 528.0 | 2112 | 1.2928 |
| 0.4015 | 529.0 | 2116 | 1.2901 |
| 0.4015 | 530.0 | 2120 | 1.2863 |
| 0.4015 | 531.0 | 2124 | 1.2819 |
| 0.4015 | 532.0 | 2128 | 1.2756 |
| 0.4015 | 533.0 | 2132 | 1.2602 |
| 0.4015 | 534.0 | 2136 | 1.2220 |
| 0.4015 | 535.0 | 2140 | 1.1909 |
| 0.4015 | 536.0 | 2144 | 1.1784 |
| 0.4015 | 537.0 | 2148 | 1.1824 |
| 0.4015 | 538.0 | 2152 | 1.1839 |
| 0.4015 | 539.0 | 2156 | 1.1836 |
| 0.4015 | 540.0 | 2160 | 1.1816 |
| 0.4015 | 541.0 | 2164 | 1.1767 |
| 0.4015 | 542.0 | 2168 | 1.1693 |
| 0.4015 | 543.0 | 2172 | 1.1573 |
| 0.4015 | 544.0 | 2176 | 1.1424 |
| 0.4015 | 545.0 | 2180 | 1.1312 |
| 0.4015 | 546.0 | 2184 | 1.1262 |
| 0.4015 | 547.0 | 2188 | 1.1330 |
| 0.4015 | 548.0 | 2192 | 1.1370 |
| 0.4015 | 549.0 | 2196 | 1.1386 |
| 0.4015 | 550.0 | 2200 | 1.1450 |
| 0.4015 | 551.0 | 2204 | 1.1489 |
| 0.4015 | 552.0 | 2208 | 1.1465 |
| 0.4015 | 553.0 | 2212 | 1.1458 |
| 0.4015 | 554.0 | 2216 | 1.1438 |
| 0.4015 | 555.0 | 2220 | 1.1405 |
| 0.4015 | 556.0 | 2224 | 1.1413 |
| 0.4015 | 557.0 | 2228 | 1.1443 |
| 0.4015 | 558.0 | 2232 | 1.1478 |
| 0.4015 | 559.0 | 2236 | 1.1519 |
| 0.4015 | 560.0 | 2240 | 1.1579 |
| 0.4015 | 561.0 | 2244 | 1.1543 |
| 0.4015 | 562.0 | 2248 | 1.1479 |
| 0.4015 | 563.0 | 2252 | 1.1474 |
| 0.4015 | 564.0 | 2256 | 1.1388 |
| 0.4015 | 565.0 | 2260 | 1.1312 |
| 0.4015 | 566.0 | 2264 | 1.1319 |
| 0.4015 | 567.0 | 2268 | 1.1345 |
| 0.4015 | 568.0 | 2272 | 1.1379 |
| 0.4015 | 569.0 | 2276 | 1.1343 |
| 0.4015 | 570.0 | 2280 | 1.1312 |
| 0.4015 | 571.0 | 2284 | 1.1294 |
| 0.4015 | 572.0 | 2288 | 1.1286 |
| 0.4015 | 573.0 | 2292 | 1.1313 |
| 0.4015 | 574.0 | 2296 | 1.1344 |
| 0.4015 | 575.0 | 2300 | 1.1408 |
| 0.4015 | 576.0 | 2304 | 1.1502 |
| 0.4015 | 577.0 | 2308 | 1.1605 |
| 0.4015 | 578.0 | 2312 | 1.1661 |
| 0.4015 | 579.0 | 2316 | 1.1772 |
| 0.4015 | 580.0 | 2320 | 1.1835 |
| 0.4015 | 581.0 | 2324 | 1.1882 |
| 0.4015 | 582.0 | 2328 | 1.1931 |
| 0.4015 | 583.0 | 2332 | 1.1966 |
| 0.4015 | 584.0 | 2336 | 1.1995 |
| 0.4015 | 585.0 | 2340 | 1.1999 |
| 0.4015 | 586.0 | 2344 | 1.1976 |
| 0.4015 | 587.0 | 2348 | 1.2158 |
| 0.4015 | 588.0 | 2352 | 1.2351 |
| 0.4015 | 589.0 | 2356 | 1.2386 |
| 0.4015 | 590.0 | 2360 | 1.2322 |
| 0.4015 | 591.0 | 2364 | 1.2268 |
| 0.4015 | 592.0 | 2368 | 1.2168 |
| 0.4015 | 593.0 | 2372 | 1.2058 |
| 0.4015 | 594.0 | 2376 | 1.1940 |
| 0.4015 | 595.0 | 2380 | 1.1846 |
| 0.4015 | 596.0 | 2384 | 1.1756 |
| 0.4015 | 597.0 | 2388 | 1.1728 |
| 0.4015 | 598.0 | 2392 | 1.1731 |
| 0.4015 | 599.0 | 2396 | 1.1747 |
| 0.4015 | 600.0 | 2400 | 1.1754 |
| 0.4015 | 601.0 | 2404 | 1.1738 |
| 0.4015 | 602.0 | 2408 | 1.1766 |
| 0.4015 | 603.0 | 2412 | 1.1779 |
| 0.4015 | 604.0 | 2416 | 1.1781 |
| 0.4015 | 605.0 | 2420 | 1.1755 |
| 0.4015 | 606.0 | 2424 | 1.1726 |
| 0.4015 | 607.0 | 2428 | 1.1691 |
| 0.4015 | 608.0 | 2432 | 1.1652 |
| 0.4015 | 609.0 | 2436 | 1.1594 |
| 0.4015 | 610.0 | 2440 | 1.1497 |
| 0.4015 | 611.0 | 2444 | 1.1450 |
| 0.4015 | 612.0 | 2448 | 1.1467 |
| 0.4015 | 613.0 | 2452 | 1.1463 |
| 0.4015 | 614.0 | 2456 | 1.1456 |
| 0.4015 | 615.0 | 2460 | 1.1613 |
| 0.4015 | 616.0 | 2464 | 1.1746 |
| 0.4015 | 617.0 | 2468 | 1.1846 |
| 0.4015 | 618.0 | 2472 | 1.1864 |
| 0.4015 | 619.0 | 2476 | 1.1849 |
| 0.4015 | 620.0 | 2480 | 1.1839 |
| 0.4015 | 621.0 | 2484 | 1.1802 |
| 0.4015 | 622.0 | 2488 | 1.1759 |
| 0.4015 | 623.0 | 2492 | 1.1711 |
| 0.4015 | 624.0 | 2496 | 1.1654 |
| 0.4009 | 625.0 | 2500 | 1.1607 |
| 0.4009 | 626.0 | 2504 | 1.1558 |
| 0.4009 | 627.0 | 2508 | 1.1530 |
| 0.4009 | 628.0 | 2512 | 1.1523 |
| 0.4009 | 629.0 | 2516 | 1.1515 |
| 0.4009 | 630.0 | 2520 | 1.1477 |
| 0.4009 | 631.0 | 2524 | 1.1447 |
| 0.4009 | 632.0 | 2528 | 1.1449 |
| 0.4009 | 633.0 | 2532 | 1.1450 |
| 0.4009 | 634.0 | 2536 | 1.1520 |
| 0.4009 | 635.0 | 2540 | 1.1594 |
| 0.4009 | 636.0 | 2544 | 1.1627 |
| 0.4009 | 637.0 | 2548 | 1.1648 |
| 0.4009 | 638.0 | 2552 | 1.1668 |
| 0.4009 | 639.0 | 2556 | 1.1679 |
| 0.4009 | 640.0 | 2560 | 1.1674 |
| 0.4009 | 641.0 | 2564 | 1.1629 |
| 0.4009 | 642.0 | 2568 | 1.1590 |
| 0.4009 | 643.0 | 2572 | 1.1572 |
| 0.4009 | 644.0 | 2576 | 1.1574 |
| 0.4009 | 645.0 | 2580 | 1.1560 |
| 0.4009 | 646.0 | 2584 | 1.1547 |
| 0.4009 | 647.0 | 2588 | 1.1626 |
| 0.4009 | 648.0 | 2592 | 1.1698 |
| 0.4009 | 649.0 | 2596 | 1.1810 |
| 0.4009 | 650.0 | 2600 | 1.1890 |
| 0.4009 | 651.0 | 2604 | 1.1906 |
| 0.4009 | 652.0 | 2608 | 1.1845 |
| 0.4009 | 653.0 | 2612 | 1.1802 |
| 0.4009 | 654.0 | 2616 | 1.1777 |
| 0.4009 | 655.0 | 2620 | 1.1755 |
| 0.4009 | 656.0 | 2624 | 1.1743 |
| 0.4009 | 657.0 | 2628 | 1.1838 |
| 0.4009 | 658.0 | 2632 | 1.1907 |
| 0.4009 | 659.0 | 2636 | 1.1953 |
| 0.4009 | 660.0 | 2640 | 1.2169 |
| 0.4009 | 661.0 | 2644 | 1.2343 |
| 0.4009 | 662.0 | 2648 | 1.2517 |
| 0.4009 | 663.0 | 2652 | 1.2641 |
| 0.4009 | 664.0 | 2656 | 1.2559 |
| 0.4009 | 665.0 | 2660 | 1.2292 |
| 0.4009 | 666.0 | 2664 | 1.2040 |
| 0.4009 | 667.0 | 2668 | 1.1851 |
| 0.4009 | 668.0 | 2672 | 1.1710 |
| 0.4009 | 669.0 | 2676 | 1.1577 |
| 0.4009 | 670.0 | 2680 | 1.1502 |
| 0.4009 | 671.0 | 2684 | 1.1591 |
| 0.4009 | 672.0 | 2688 | 1.1709 |
| 0.4009 | 673.0 | 2692 | 1.1813 |
| 0.4009 | 674.0 | 2696 | 1.1893 |
| 0.4009 | 675.0 | 2700 | 1.1942 |
| 0.4009 | 676.0 | 2704 | 1.1949 |
| 0.4009 | 677.0 | 2708 | 1.1814 |
| 0.4009 | 678.0 | 2712 | 1.1825 |
| 0.4009 | 679.0 | 2716 | 1.1880 |
| 0.4009 | 680.0 | 2720 | 1.1829 |
| 0.4009 | 681.0 | 2724 | 1.1667 |
| 0.4009 | 682.0 | 2728 | 1.1637 |
| 0.4009 | 683.0 | 2732 | 1.1631 |
| 0.4009 | 684.0 | 2736 | 1.1605 |
| 0.4009 | 685.0 | 2740 | 1.1599 |
| 0.4009 | 686.0 | 2744 | 1.1571 |
| 0.4009 | 687.0 | 2748 | 1.1528 |
| 0.4009 | 688.0 | 2752 | 1.1541 |
| 0.4009 | 689.0 | 2756 | 1.1628 |
| 0.4009 | 690.0 | 2760 | 1.1750 |
| 0.4009 | 691.0 | 2764 | 1.1855 |
| 0.4009 | 692.0 | 2768 | 1.1928 |
| 0.4009 | 693.0 | 2772 | 1.1962 |
| 0.4009 | 694.0 | 2776 | 1.1970 |
| 0.4009 | 695.0 | 2780 | 1.1976 |
| 0.4009 | 696.0 | 2784 | 1.1929 |
| 0.4009 | 697.0 | 2788 | 1.1959 |
| 0.4009 | 698.0 | 2792 | 1.2003 |
| 0.4009 | 699.0 | 2796 | 1.2046 |
| 0.4009 | 700.0 | 2800 | 1.2084 |
| 0.4009 | 701.0 | 2804 | 1.2097 |
| 0.4009 | 702.0 | 2808 | 1.2109 |
| 0.4009 | 703.0 | 2812 | 1.2124 |
| 0.4009 | 704.0 | 2816 | 1.2159 |
| 0.4009 | 705.0 | 2820 | 1.2190 |
| 0.4009 | 706.0 | 2824 | 1.2203 |
| 0.4009 | 707.0 | 2828 | 1.2186 |
| 0.4009 | 708.0 | 2832 | 1.2156 |
| 0.4009 | 709.0 | 2836 | 1.2086 |
| 0.4009 | 710.0 | 2840 | 1.2024 |
| 0.4009 | 711.0 | 2844 | 1.1998 |
| 0.4009 | 712.0 | 2848 | 1.1986 |
| 0.4009 | 713.0 | 2852 | 1.1981 |
| 0.4009 | 714.0 | 2856 | 1.2001 |
| 0.4009 | 715.0 | 2860 | 1.2019 |
| 0.4009 | 716.0 | 2864 | 1.2038 |
| 0.4009 | 717.0 | 2868 | 1.2051 |
| 0.4009 | 718.0 | 2872 | 1.1869 |
| 0.4009 | 719.0 | 2876 | 1.1780 |
| 0.4009 | 720.0 | 2880 | 1.1821 |
| 0.4009 | 721.0 | 2884 | 1.1875 |
| 0.4009 | 722.0 | 2888 | 1.1881 |
| 0.4009 | 723.0 | 2892 | 1.1867 |
| 0.4009 | 724.0 | 2896 | 1.1862 |
| 0.4009 | 725.0 | 2900 | 1.1858 |
| 0.4009 | 726.0 | 2904 | 1.1841 |
| 0.4009 | 727.0 | 2908 | 1.1803 |
| 0.4009 | 728.0 | 2912 | 1.1781 |
| 0.4009 | 729.0 | 2916 | 1.1751 |
| 0.4009 | 730.0 | 2920 | 1.1735 |
| 0.4009 | 731.0 | 2924 | 1.1709 |
| 0.4009 | 732.0 | 2928 | 1.1676 |
| 0.4009 | 733.0 | 2932 | 1.1643 |
| 0.4009 | 734.0 | 2936 | 1.1640 |
| 0.4009 | 735.0 | 2940 | 1.1636 |
| 0.4009 | 736.0 | 2944 | 1.1596 |
| 0.4009 | 737.0 | 2948 | 1.1704 |
| 0.4009 | 738.0 | 2952 | 1.1773 |
| 0.4009 | 739.0 | 2956 | 1.1814 |
| 0.4009 | 740.0 | 2960 | 1.1891 |
| 0.4009 | 741.0 | 2964 | 1.1954 |
| 0.4009 | 742.0 | 2968 | 1.2006 |
| 0.4009 | 743.0 | 2972 | 1.1996 |
| 0.4009 | 744.0 | 2976 | 1.1986 |
| 0.4009 | 745.0 | 2980 | 1.1979 |
| 0.4009 | 746.0 | 2984 | 1.1958 |
| 0.4009 | 747.0 | 2988 | 1.1947 |
| 0.4009 | 748.0 | 2992 | 1.1930 |
| 0.4009 | 749.0 | 2996 | 1.1894 |
| 0.4006 | 750.0 | 3000 | 1.1871 |
| 0.4006 | 751.0 | 3004 | 1.1853 |
| 0.4006 | 752.0 | 3008 | 1.1854 |
| 0.4006 | 753.0 | 3012 | 1.1866 |
| 0.4006 | 754.0 | 3016 | 1.1901 |
| 0.4006 | 755.0 | 3020 | 1.1924 |
| 0.4006 | 756.0 | 3024 | 1.1946 |
| 0.4006 | 757.0 | 3028 | 1.2176 |
| 0.4006 | 758.0 | 3032 | 1.2392 |
| 0.4006 | 759.0 | 3036 | 1.2502 |
| 0.4006 | 760.0 | 3040 | 1.2617 |
| 0.4006 | 761.0 | 3044 | 1.2924 |
| 0.4006 | 762.0 | 3048 | 1.3111 |
| 0.4006 | 763.0 | 3052 | 1.3042 |
| 0.4006 | 764.0 | 3056 | 1.2828 |
| 0.4006 | 765.0 | 3060 | 1.2628 |
| 0.4006 | 766.0 | 3064 | 1.2553 |
| 0.4006 | 767.0 | 3068 | 1.2600 |
| 0.4006 | 768.0 | 3072 | 1.2645 |
| 0.4006 | 769.0 | 3076 | 1.2678 |
| 0.4006 | 770.0 | 3080 | 1.2706 |
| 0.4006 | 771.0 | 3084 | 1.2620 |
| 0.4006 | 772.0 | 3088 | 1.2547 |
| 0.4006 | 773.0 | 3092 | 1.2503 |
| 0.4006 | 774.0 | 3096 | 1.2459 |
| 0.4006 | 775.0 | 3100 | 1.2452 |
| 0.4006 | 776.0 | 3104 | 1.2442 |
| 0.4006 | 777.0 | 3108 | 1.2393 |
| 0.4006 | 778.0 | 3112 | 1.2328 |
| 0.4006 | 779.0 | 3116 | 1.2249 |
| 0.4006 | 780.0 | 3120 | 1.2223 |
| 0.4006 | 781.0 | 3124 | 1.2302 |
| 0.4006 | 782.0 | 3128 | 1.2334 |
| 0.4006 | 783.0 | 3132 | 1.2332 |
| 0.4006 | 784.0 | 3136 | 1.2326 |
| 0.4006 | 785.0 | 3140 | 1.2330 |
| 0.4006 | 786.0 | 3144 | 1.2281 |
| 0.4006 | 787.0 | 3148 | 1.2294 |
| 0.4006 | 788.0 | 3152 | 1.2327 |
| 0.4006 | 789.0 | 3156 | 1.2408 |
| 0.4006 | 790.0 | 3160 | 1.2459 |
| 0.4006 | 791.0 | 3164 | 1.2488 |
| 0.4006 | 792.0 | 3168 | 1.2509 |
| 0.4006 | 793.0 | 3172 | 1.2510 |
| 0.4006 | 794.0 | 3176 | 1.2514 |
| 0.4006 | 795.0 | 3180 | 1.2491 |
| 0.4006 | 796.0 | 3184 | 1.2476 |
| 0.4006 | 797.0 | 3188 | 1.2470 |
| 0.4006 | 798.0 | 3192 | 1.2470 |
| 0.4006 | 799.0 | 3196 | 1.2464 |
| 0.4006 | 800.0 | 3200 | 1.2468 |
| 0.4006 | 801.0 | 3204 | 1.2460 |
| 0.4006 | 802.0 | 3208 | 1.2425 |
| 0.4006 | 803.0 | 3212 | 1.2415 |
| 0.4006 | 804.0 | 3216 | 1.2416 |
| 0.4006 | 805.0 | 3220 | 1.2420 |
| 0.4006 | 806.0 | 3224 | 1.2442 |
| 0.4006 | 807.0 | 3228 | 1.2465 |
| 0.4006 | 808.0 | 3232 | 1.2481 |
| 0.4006 | 809.0 | 3236 | 1.2477 |
| 0.4006 | 810.0 | 3240 | 1.2468 |
| 0.4006 | 811.0 | 3244 | 1.2467 |
| 0.4006 | 812.0 | 3248 | 1.2471 |
| 0.4006 | 813.0 | 3252 | 1.2486 |
| 0.4006 | 814.0 | 3256 | 1.2484 |
| 0.4006 | 815.0 | 3260 | 1.2484 |
| 0.4006 | 816.0 | 3264 | 1.2477 |
| 0.4006 | 817.0 | 3268 | 1.2545 |
| 0.4006 | 818.0 | 3272 | 1.2622 |
| 0.4006 | 819.0 | 3276 | 1.2672 |
| 0.4006 | 820.0 | 3280 | 1.2704 |
| 0.4006 | 821.0 | 3284 | 1.2719 |
| 0.4006 | 822.0 | 3288 | 1.2710 |
| 0.4006 | 823.0 | 3292 | 1.2697 |
| 0.4006 | 824.0 | 3296 | 1.2671 |
| 0.4006 | 825.0 | 3300 | 1.2717 |
| 0.4006 | 826.0 | 3304 | 1.2763 |
| 0.4006 | 827.0 | 3308 | 1.2774 |
| 0.4006 | 828.0 | 3312 | 1.2773 |
| 0.4006 | 829.0 | 3316 | 1.2765 |
| 0.4006 | 830.0 | 3320 | 1.2767 |
| 0.4006 | 831.0 | 3324 | 1.2760 |
| 0.4006 | 832.0 | 3328 | 1.2755 |
| 0.4006 | 833.0 | 3332 | 1.2742 |
| 0.4006 | 834.0 | 3336 | 1.2732 |
| 0.4006 | 835.0 | 3340 | 1.2681 |
| 0.4006 | 836.0 | 3344 | 1.2624 |
| 0.4006 | 837.0 | 3348 | 1.2577 |
| 0.4006 | 838.0 | 3352 | 1.2530 |
| 0.4006 | 839.0 | 3356 | 1.2488 |
| 0.4006 | 840.0 | 3360 | 1.2455 |
| 0.4006 | 841.0 | 3364 | 1.2440 |
| 0.4006 | 842.0 | 3368 | 1.2459 |
| 0.4006 | 843.0 | 3372 | 1.2487 |
| 0.4006 | 844.0 | 3376 | 1.2498 |
| 0.4006 | 845.0 | 3380 | 1.2504 |
| 0.4006 | 846.0 | 3384 | 1.2476 |
| 0.4006 | 847.0 | 3388 | 1.2446 |
| 0.4006 | 848.0 | 3392 | 1.2400 |
| 0.4006 | 849.0 | 3396 | 1.2353 |
| 0.4006 | 850.0 | 3400 | 1.2298 |
| 0.4006 | 851.0 | 3404 | 1.2246 |
| 0.4006 | 852.0 | 3408 | 1.2207 |
| 0.4006 | 853.0 | 3412 | 1.2129 |
| 0.4006 | 854.0 | 3416 | 1.2030 |
| 0.4006 | 855.0 | 3420 | 1.1937 |
| 0.4006 | 856.0 | 3424 | 1.1898 |
| 0.4006 | 857.0 | 3428 | 1.1907 |
| 0.4006 | 858.0 | 3432 | 1.1910 |
| 0.4006 | 859.0 | 3436 | 1.1919 |
| 0.4006 | 860.0 | 3440 | 1.1920 |
| 0.4006 | 861.0 | 3444 | 1.1923 |
| 0.4006 | 862.0 | 3448 | 1.1927 |
| 0.4006 | 863.0 | 3452 | 1.1933 |
| 0.4006 | 864.0 | 3456 | 1.1934 |
| 0.4006 | 865.0 | 3460 | 1.1937 |
| 0.4006 | 866.0 | 3464 | 1.1936 |
| 0.4006 | 867.0 | 3468 | 1.1932 |
| 0.4006 | 868.0 | 3472 | 1.1926 |
| 0.4006 | 869.0 | 3476 | 1.1917 |
| 0.4006 | 870.0 | 3480 | 1.1899 |
| 0.4006 | 871.0 | 3484 | 1.1884 |
| 0.4006 | 872.0 | 3488 | 1.1858 |
| 0.4006 | 873.0 | 3492 | 1.1842 |
| 0.4006 | 874.0 | 3496 | 1.1835 |
| 0.4 | 875.0 | 3500 | 1.1836 |
| 0.4 | 876.0 | 3504 | 1.1845 |
| 0.4 | 877.0 | 3508 | 1.1867 |
| 0.4 | 878.0 | 3512 | 1.1902 |
| 0.4 | 879.0 | 3516 | 1.1945 |
| 0.4 | 880.0 | 3520 | 1.1972 |
| 0.4 | 881.0 | 3524 | 1.1996 |
| 0.4 | 882.0 | 3528 | 1.2025 |
| 0.4 | 883.0 | 3532 | 1.2048 |
| 0.4 | 884.0 | 3536 | 1.2061 |
| 0.4 | 885.0 | 3540 | 1.2076 |
| 0.4 | 886.0 | 3544 | 1.2078 |
| 0.4 | 887.0 | 3548 | 1.2093 |
| 0.4 | 888.0 | 3552 | 1.2160 |
| 0.4 | 889.0 | 3556 | 1.2185 |
| 0.4 | 890.0 | 3560 | 1.2167 |
| 0.4 | 891.0 | 3564 | 1.2196 |
| 0.4 | 892.0 | 3568 | 1.2207 |
| 0.4 | 893.0 | 3572 | 1.2203 |
| 0.4 | 894.0 | 3576 | 1.2191 |
| 0.4 | 895.0 | 3580 | 1.2181 |
| 0.4 | 896.0 | 3584 | 1.2176 |
| 0.4 | 897.0 | 3588 | 1.2169 |
| 0.4 | 898.0 | 3592 | 1.2157 |
| 0.4 | 899.0 | 3596 | 1.2177 |
| 0.4 | 900.0 | 3600 | 1.2208 |
| 0.4 | 901.0 | 3604 | 1.2232 |
| 0.4 | 902.0 | 3608 | 1.2245 |
| 0.4 | 903.0 | 3612 | 1.2242 |
| 0.4 | 904.0 | 3616 | 1.2231 |
| 0.4 | 905.0 | 3620 | 1.2219 |
| 0.4 | 906.0 | 3624 | 1.2211 |
| 0.4 | 907.0 | 3628 | 1.2215 |
| 0.4 | 908.0 | 3632 | 1.2216 |
| 0.4 | 909.0 | 3636 | 1.2204 |
| 0.4 | 910.0 | 3640 | 1.2193 |
| 0.4 | 911.0 | 3644 | 1.2182 |
| 0.4 | 912.0 | 3648 | 1.2165 |
| 0.4 | 913.0 | 3652 | 1.2148 |
| 0.4 | 914.0 | 3656 | 1.2128 |
| 0.4 | 915.0 | 3660 | 1.2120 |
| 0.4 | 916.0 | 3664 | 1.2113 |
| 0.4 | 917.0 | 3668 | 1.2111 |
| 0.4 | 918.0 | 3672 | 1.2114 |
| 0.4 | 919.0 | 3676 | 1.2117 |
| 0.4 | 920.0 | 3680 | 1.2108 |
| 0.4 | 921.0 | 3684 | 1.2107 |
| 0.4 | 922.0 | 3688 | 1.2097 |
| 0.4 | 923.0 | 3692 | 1.2084 |
| 0.4 | 924.0 | 3696 | 1.2072 |
| 0.4 | 925.0 | 3700 | 1.2063 |
| 0.4 | 926.0 | 3704 | 1.2060 |
| 0.4 | 927.0 | 3708 | 1.2055 |
| 0.4 | 928.0 | 3712 | 1.2053 |
| 0.4 | 929.0 | 3716 | 1.2053 |
| 0.4 | 930.0 | 3720 | 1.2055 |
| 0.4 | 931.0 | 3724 | 1.2061 |
| 0.4 | 932.0 | 3728 | 1.2091 |
| 0.4 | 933.0 | 3732 | 1.2121 |
| 0.4 | 934.0 | 3736 | 1.2141 |
| 0.4 | 935.0 | 3740 | 1.2150 |
| 0.4 | 936.0 | 3744 | 1.2152 |
| 0.4 | 937.0 | 3748 | 1.2153 |
| 0.4 | 938.0 | 3752 | 1.2153 |
| 0.4 | 939.0 | 3756 | 1.2150 |
| 0.4 | 940.0 | 3760 | 1.2153 |
| 0.4 | 941.0 | 3764 | 1.2154 |
| 0.4 | 942.0 | 3768 | 1.2156 |
| 0.4 | 943.0 | 3772 | 1.2156 |
| 0.4 | 944.0 | 3776 | 1.2144 |
| 0.4 | 945.0 | 3780 | 1.2107 |
| 0.4 | 946.0 | 3784 | 1.2078 |
| 0.4 | 947.0 | 3788 | 1.2060 |
| 0.4 | 948.0 | 3792 | 1.2047 |
| 0.4 | 949.0 | 3796 | 1.2026 |
| 0.4 | 950.0 | 3800 | 1.2003 |
| 0.4 | 951.0 | 3804 | 1.1986 |
| 0.4 | 952.0 | 3808 | 1.1975 |
| 0.4 | 953.0 | 3812 | 1.1969 |
| 0.4 | 954.0 | 3816 | 1.1958 |
| 0.4 | 955.0 | 3820 | 1.1946 |
| 0.4 | 956.0 | 3824 | 1.1937 |
| 0.4 | 957.0 | 3828 | 1.1928 |
| 0.4 | 958.0 | 3832 | 1.1928 |
| 0.4 | 959.0 | 3836 | 1.1928 |
| 0.4 | 960.0 | 3840 | 1.1933 |
| 0.4 | 961.0 | 3844 | 1.1939 |
| 0.4 | 962.0 | 3848 | 1.1942 |
| 0.4 | 963.0 | 3852 | 1.1947 |
| 0.4 | 964.0 | 3856 | 1.1954 |
| 0.4 | 965.0 | 3860 | 1.1961 |
| 0.4 | 966.0 | 3864 | 1.1966 |
| 0.4 | 967.0 | 3868 | 1.1985 |
| 0.4 | 968.0 | 3872 | 1.2002 |
| 0.4 | 969.0 | 3876 | 1.2015 |
| 0.4 | 970.0 | 3880 | 1.2035 |
| 0.4 | 971.0 | 3884 | 1.2047 |
| 0.4 | 972.0 | 3888 | 1.2050 |
| 0.4 | 973.0 | 3892 | 1.2057 |
| 0.4 | 974.0 | 3896 | 1.2064 |
| 0.4 | 975.0 | 3900 | 1.2068 |
| 0.4 | 976.0 | 3904 | 1.2067 |
| 0.4 | 977.0 | 3908 | 1.2067 |
| 0.4 | 978.0 | 3912 | 1.2065 |
| 0.4 | 979.0 | 3916 | 1.2063 |
| 0.4 | 980.0 | 3920 | 1.2060 |
| 0.4 | 981.0 | 3924 | 1.2059 |
| 0.4 | 982.0 | 3928 | 1.2059 |
| 0.4 | 983.0 | 3932 | 1.2059 |
| 0.4 | 984.0 | 3936 | 1.2060 |
| 0.4 | 985.0 | 3940 | 1.2060 |
| 0.4 | 986.0 | 3944 | 1.2059 |
| 0.4 | 987.0 | 3948 | 1.2059 |
| 0.4 | 988.0 | 3952 | 1.2059 |
| 0.4 | 989.0 | 3956 | 1.2059 |
| 0.4 | 990.0 | 3960 | 1.2059 |
| 0.4 | 991.0 | 3964 | 1.2060 |
| 0.4 | 992.0 | 3968 | 1.2060 |
| 0.4 | 993.0 | 3972 | 1.2060 |
| 0.4 | 994.0 | 3976 | 1.2054 |
| 0.4 | 995.0 | 3980 | 1.2047 |
| 0.4 | 996.0 | 3984 | 1.2043 |
| 0.4 | 997.0 | 3988 | 1.2041 |
| 0.4 | 998.0 | 3992 | 1.2040 |
| 0.4 | 999.0 | 3996 | 1.2039 |
| 0.4009 | 1000.0 | 4000 | 1.2040 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
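A minimal usage sketch (not part of the original card): loading the fine-tuned model through the `question-answering` pipeline. The German question and context below are invented placeholders.

```python
from transformers import pipeline

# Load the fine-tuned German legal QA model from this card.
qa = pipeline(
    "question-answering",
    model="farid1088/GQA_BERT_German_legal_SQuAD_2000",
)

# Placeholder question/context; substitute real German legal text.
result = qa(
    question="Wer kann die Herausgabe der Sache verlangen?",
    context=(
        "Der Eigentümer kann von dem Besitzer die Herausgabe "
        "der Sache verlangen."
    ),
)
print(result["answer"], result["score"])
```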
|
MakTek/code_llama-5e-new_data
|
MakTek
| 2024-03-07T15:06:04Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:NousResearch/CodeLlama-7b-hf",
"base_model:adapter:NousResearch/CodeLlama-7b-hf",
"region:us"
] | null | 2024-03-07T15:05:55Z |
---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: NousResearch/CodeLlama-7b-hf
model-index:
- name: results_code_llama-5e-0.1_new_data
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_code_llama-5e-0.1_new_data
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
QuackyMcDuck/ppo-Huggy
|
QuackyMcDuck
| 2024-03-07T15:03:34Z | 2 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-03-07T15:03:29Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: QuackyMcDuck/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
meghanath852/donut-base-sroie
|
meghanath852
| 2024-03-07T15:02:58Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-03-07T14:37:16Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
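An inference sketch, with assumptions: the blank image is a placeholder for a real receipt scan, and the `<s_sroie>` task prompt follows the common donut-base SROIE fine-tuning recipe but is not confirmed by this card.

```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "meghanath852/donut-base-sroie"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# Placeholder image; use a real receipt scan in practice.
image = Image.new("RGB", (1280, 960), "white")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Task prompt is an assumption based on the usual SROIE setup.
task_prompt = "<s_sroie>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values, decoder_input_ids=decoder_input_ids, max_length=64
    )
print(processor.batch_decode(outputs)[0])
```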
|
MomoSen/distilbert-base-uncased-lora-text-classification
|
MomoSen
| 2024-03-07T15:02:32Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T15:02:28Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: distilbert-base-uncased
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9904
- Accuracy: {'accuracy': 0.899}
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.5709 | {'accuracy': 0.837} |
| 0.4386 | 2.0 | 500 | 0.4510 | {'accuracy': 0.871} |
| 0.4386 | 3.0 | 750 | 0.6571 | {'accuracy': 0.887} |
| 0.1891 | 4.0 | 1000 | 0.6197 | {'accuracy': 0.894} |
| 0.1891 | 5.0 | 1250 | 0.7688 | {'accuracy': 0.897} |
| 0.0683 | 6.0 | 1500 | 0.8231 | {'accuracy': 0.892} |
| 0.0683 | 7.0 | 1750 | 0.8949 | {'accuracy': 0.901} |
| 0.0136 | 8.0 | 2000 | 0.9553 | {'accuracy': 0.896} |
| 0.0136 | 9.0 | 2250 | 1.0202 | {'accuracy': 0.892} |
| 0.0067 | 10.0 | 2500 | 0.9904 | {'accuracy': 0.899} |
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.15.2
|
MomoSen/distilbert-base-uncased-lora-text-classification_a
|
MomoSen
| 2024-03-07T15:02:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T15:02:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pabloma09/output_dir
|
pabloma09
| 2024-03-07T15:02:11Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-07T13:54:27Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: output_dir
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# output_dir
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the SAMSUM (summarization) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8411 | 0.4 | 368 | 1.8439 |
| 1.6179 | 0.8 | 736 | 1.8187 |
| 1.5641 | 1.2 | 1104 | 1.8084 |
| 2.2357 | 1.6 | 1472 | 1.8017 |
| 1.388 | 2.0 | 1840 | 1.8001 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ValouF-pimento/ControlNet_SDXL_tile_upscale
|
ValouF-pimento
| 2024-03-07T14:58:21Z | 8 | 3 |
diffusers
|
[
"diffusers",
"license:apache-2.0",
"region:us"
] | null | 2024-03-07T14:46:02Z |
---
license: apache-2.0
library_name: diffusers
---
To use it:
```python
from diffusers import ControlNetModel
import torch
model = ControlNetModel.from_pretrained(
"ValouF-pimento/ControlNet_SDXL_tile_upscale",
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
```
|
ctu-aic/xlm-roberta-large-nli-csfever
|
ctu-aic
| 2024-03-07T14:57:53Z | 88 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"cs",
"dataset:ctu-aic/csfever_nli",
"arxiv:2312.10171",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T13:32:24Z |
---
datasets:
- ctu-aic/csfever_nli
language:
- cs
pipeline_tag: text-classification
---
This model is [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) fine-tuned on the [CsFEVER-NLI](https://huggingface.co/datasets/ctu-aic/csfever_nli) dataset.
For more information, see our [Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language](https://arxiv.org/abs/2312.10171) paper.
The paper is currently under review for the [NCAA](https://link.springer.com/journal/521) journal.
```bibtex
@article{drchal2023pipeline,
title={Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language},
author={Drchal, Jan and Ullrich, Herbert and Mlyn{\'a}{\v{r}}, Tom{\'a}{\v{s}} and Moravec, V{\'a}clav},
journal={arXiv preprint arXiv:2312.10171},
year={2023}
}
```
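A hedged usage sketch: the exact input format expected by the fine-tuned classifier is an assumption (a standard premise/hypothesis pair via the text-classification pipeline), and the Czech sentences are placeholders.

```python
from transformers import pipeline

nli = pipeline(
    "text-classification",
    model="ctu-aic/xlm-roberta-large-nli-csfever",
)

# Evidence/claim pair (placeholder Czech text).
out = nli({
    "text": "Praha je hlavní město České republiky.",
    "text_pair": "Praha je hlavním městem Česka.",
})
pred = out[0] if isinstance(out, list) else out
print(pred["label"], pred["score"])
```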
|
saisamarth/gemma-2b-scipaper-finetune
|
saisamarth
| 2024-03-07T14:55:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T14:45:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ctu-aic/xlm-roberta-large-nli-enfever
|
ctu-aic
| 2024-03-07T14:54:25Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"dataset:ctu-aic/enfever_nli",
"arxiv:2312.10171",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T16:11:01Z |
---
datasets:
- ctu-aic/enfever_nli
language:
- en
pipeline_tag: text-classification
---
This model is [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) fine-tuned on the [EnFEVER-NLI](https://huggingface.co/datasets/ctu-aic/enfever_nli) dataset.
For more information, see our [Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language](https://arxiv.org/abs/2312.10171) paper.
The paper is currently under review for the [NCAA](https://link.springer.com/journal/521) journal.
```bibtex
@article{drchal2023pipeline,
title={Pipeline and Dataset Generation for Automated Fact-checking in Almost Any Language},
author={Drchal, Jan and Ullrich, Herbert and Mlyn{\'a}{\v{r}}, Tom{\'a}{\v{s}} and Moravec, V{\'a}clav},
journal={arXiv preprint arXiv:2312.10171},
year={2023}
}
```
|
joshus/bge-base-0803
|
joshus
| 2024-03-07T14:51:46Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-07T14:51:25Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# joshus/bge-base-0803
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joshus/bge-base-0803')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joshus/bge-base-0803)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
divinetaco/aranea-ancilla-116b-v1.0-4.4bpw-exl2
|
divinetaco
| 2024-03-07T14:47:58Z | 10 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:152334H/miqu-1-70b-sf",
"base_model:merge:152334H/miqu-1-70b-sf",
"base_model:NeverSleep/MiquMaid-v1-70B",
"base_model:merge:NeverSleep/MiquMaid-v1-70B",
"base_model:Sao10K/WinterGoddess-1.4x-70B-L2",
"base_model:merge:Sao10K/WinterGoddess-1.4x-70B-L2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-07T08:39:30Z |
---
base_model:
- 152334H/miqu-1-70b-sf
- NeverSleep/MiquMaid-v1-70B
- Sao10K/WinterGoddess-1.4x-70B-L2
library_name: transformers
tags:
- mergekit
- merge
---
# aranea-ancilla-116b-v1.0-4.4bpw-exl2
**aka MiquMaid-v1-70B + interleaved WinterGoddess-1.4x-70B-L2**

A [mergekit](https://github.com/arcee-ai/mergekit) frankenmerge based on [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B) with interleaved layers of [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
This was the top-performing model from a series of merge experiments aimed at creating a highly coherent creative-writing model.
Tests consisted of a series of private benchmarks and manual comparisons. A number of different base models, interleave models, and layer offsets were compared.
- Usable context ~32768
- Recommended context ~16384
Non-frankenstein miqu-1 finetunes generally outperform their frankenstein counterparts at very long contexts due to coherency loss.
As a rough guideline, consider swapping to either [NeverSleep/MiquMaid-v1-70B](https://huggingface.co/NeverSleep/MiquMaid-v1-70B) or [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) beyond 16k context.
Layers: 136
### License
No license. The component models are based on the [Mistral AI Miqu-1](https://huggingface.co/miqudev/miqu-1-70b/tree/main) llama2 finetune, which was released without a license.
### Interesting observations from benchmarking
- A 10-layer interleave stride with a 20-layer interleave width consistently outperformed alternative combinations.
- Offsetting the interleaved model's first set of layers generally improved coherency: [14-30] reliably beat the [10-30] mergekit slice configuration for various combinations of models.
- Quality of resulting merges can vary wildly. Whilst a merge of two strong models tends to produce a strong frankenstein model, this rule does not always hold true.
### Quantizations
Exllamav2 quants will be available when bandwidth permits.
|