| Column | Type | Range / distinct values |
|:--|:--|:--|
| modelId | string | length 5 to 139 |
| author | string | length 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-09-21 18:29:58 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 569 distinct values |
| tags | list | length 1 to 4.05k |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-09-21 18:29:29 |
| card | string | length 11 to 1.01M |
breezedeus/cnocr-ppocr-ch_PP-OCRv4
breezedeus
2024-11-26T14:18:32Z
101
0
null
[ "onnx", "OCR", "STD", "Chinese", "English", "Optical Character Recognition", "license:apache-2.0", "region:us" ]
null
2024-11-26T14:11:24Z
--- license: apache-2.0 tags: - OCR - STD - Chinese - English - Optical Character Recognition --- # Text Recognition Model for CnOCR CnOCR: Awesome Chinese/English OCR Python toolkit based on PyTorch. It comes with 20+ well-trained models for different application scenarios and can be used directly after installation. For more information, see [CnOCR](https://github.com/breezedeus/CnOCR).
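A minimal usage sketch for the card above, assuming the `cnocr` package's `CnOcr` class; the `rec_model_name` string is inferred from the repo name and the image path is a placeholder, neither is stated in the card:

```python
# pip install "cnocr[ort-cpu]"   # assumed install target for the ONNX runtime backend
from cnocr import CnOcr

# rec_model_name="ch_PP-OCRv4" is an assumption based on the repo name; check the
# CnOCR docs for the exact identifier that maps to this checkpoint.
ocr = CnOcr(rec_model_name="ch_PP-OCRv4")
result = ocr.ocr("example_image.jpg")  # replace with a real image path
for line in result:
    print(line["text"], line["score"])
```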
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold0
MayBashendy
2024-11-26T14:18:29Z
166
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T14:12:41Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold0 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8845 - Qwk: 0.3761 - Mse: 0.8845 - Rmse: 0.9405 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0606 | 2 | 3.4787 | 0.0015 | 3.4787 | 1.8651 | | No log | 0.1212 | 4 | 1.7789 | 0.0376 | 1.7789 | 1.3337 | | No log | 0.1818 | 6 | 0.9660 | 0.1525 | 0.9660 | 0.9829 | | No log | 0.2424 | 8 | 0.8512 | 0.1567 | 0.8512 | 0.9226 | | No log | 0.3030 | 10 | 0.9987 | 0.0883 | 0.9987 | 0.9993 | | No log | 0.3636 | 12 | 1.3826 | 0.1567 | 1.3826 | 1.1758 | | No log | 0.4242 | 14 | 1.6606 | 0.0654 | 1.6606 | 1.2886 | | No log | 0.4848 | 16 | 1.5158 | 0.1230 | 1.5158 | 1.2312 | | No log | 0.5455 | 18 | 1.2911 | 0.1394 | 1.2911 | 1.1363 | | No log | 0.6061 | 20 | 1.0920 | 0.0455 | 1.0920 | 1.0450 | | No log | 0.6667 | 22 | 1.0941 | 0.0455 | 1.0941 | 1.0460 | | No log | 0.7273 | 24 | 0.9738 | 0.0146 | 0.9738 | 0.9868 | | No log | 0.7879 | 26 | 0.8735 | 0.0348 | 0.8735 | 0.9346 | | No log | 0.8485 | 28 | 0.9888 | -0.0063 | 0.9888 | 0.9944 | | No log | 0.9091 | 30 | 1.2345 | 0.0731 | 1.2345 | 1.1111 | | No log | 0.9697 | 32 | 1.2610 | 0.0731 | 1.2610 | 1.1230 | | No log | 1.0303 | 34 | 1.3057 | 0.0296 | 1.3057 | 1.1427 | | No log | 1.0909 | 36 | 1.1829 | 0.0597 | 1.1829 | 1.0876 | | No log | 1.1515 | 38 | 1.0741 | 0.0597 | 1.0741 | 1.0364 | | No log | 1.2121 | 40 | 0.7988 | 0.1072 | 0.7988 | 0.8938 | | No log | 1.2727 | 42 | 0.7304 | 0.1664 | 0.7304 | 0.8547 | | No log | 1.3333 | 44 | 0.7790 | 0.1968 | 0.7790 | 0.8826 | | No log | 1.3939 | 46 | 0.7542 | 0.2855 | 0.7542 | 0.8685 | | No log | 1.4545 | 48 | 0.7283 | 0.1733 | 0.7283 | 0.8534 | | No log | 1.5152 | 50 | 0.8861 | 0.0645 | 0.8861 | 0.9413 | | No log | 1.5758 | 52 | 0.9275 | -0.0273 | 0.9275 | 0.9631 | | No log | 1.6364 | 54 | 0.8312 | 0.0106 | 0.8312 | 0.9117 | | No log | 1.6970 | 56 | 0.8456 | 0.1554 | 0.8456 | 0.9196 | | No log | 1.7576 | 58 | 0.8560 | 0.1567 | 0.8560 | 0.9252 | | No log | 1.8182 | 60 | 0.8427 | 0.1404 | 0.8427 | 0.9180 | | No log | 1.8788 | 62 | 0.8100 | 0.1445 | 0.8100 | 0.9000 | | No log | 1.9394 | 64 | 0.7584 | 0.1580 | 0.7584 | 0.8709 | | No log | 2.0 | 66 | 0.7400 | 0.1650 | 0.7400 | 0.8603 | | No log | 2.0606 | 68 | 0.7570 | 0.0962 | 0.7570 | 0.8701 | | No log | 2.1212 | 70 | 0.7674 | 0.1583 | 0.7674 | 0.8760 | | No log | 2.1818 | 72 | 0.7523 | 0.1596 | 0.7523 | 0.8673 | | No log | 
2.2424 | 74 | 0.7400 | 0.2280 | 0.7400 | 0.8602 | | No log | 2.3030 | 76 | 0.7575 | 0.2444 | 0.7575 | 0.8703 | | No log | 2.3636 | 78 | 0.7693 | 0.2420 | 0.7693 | 0.8771 | | No log | 2.4242 | 80 | 0.7571 | 0.1939 | 0.7571 | 0.8701 | | No log | 2.4848 | 82 | 0.7377 | 0.1775 | 0.7377 | 0.8589 | | No log | 2.5455 | 84 | 0.7627 | 0.2155 | 0.7627 | 0.8733 | | No log | 2.6061 | 86 | 0.9443 | 0.1794 | 0.9443 | 0.9718 | | No log | 2.6667 | 88 | 1.0278 | 0.1640 | 1.0278 | 1.0138 | | No log | 2.7273 | 90 | 0.9986 | 0.1794 | 0.9986 | 0.9993 | | No log | 2.7879 | 92 | 0.8415 | 0.1962 | 0.8415 | 0.9173 | | No log | 2.8485 | 94 | 0.6808 | 0.2819 | 0.6808 | 0.8251 | | No log | 2.9091 | 96 | 0.6661 | 0.2689 | 0.6661 | 0.8161 | | No log | 2.9697 | 98 | 0.6626 | 0.3084 | 0.6626 | 0.8140 | | No log | 3.0303 | 100 | 0.6697 | 0.2819 | 0.6697 | 0.8183 | | No log | 3.0909 | 102 | 0.6897 | 0.3239 | 0.6897 | 0.8305 | | No log | 3.1515 | 104 | 0.7506 | 0.2448 | 0.7506 | 0.8664 | | No log | 3.2121 | 106 | 0.8196 | 0.2133 | 0.8196 | 0.9053 | | No log | 3.2727 | 108 | 0.8407 | 0.2582 | 0.8407 | 0.9169 | | No log | 3.3333 | 110 | 0.8075 | 0.2539 | 0.8075 | 0.8986 | | No log | 3.3939 | 112 | 0.8473 | 0.2398 | 0.8473 | 0.9205 | | No log | 3.4545 | 114 | 0.9741 | 0.3075 | 0.9741 | 0.9870 | | No log | 3.5152 | 116 | 1.1326 | 0.2583 | 1.1326 | 1.0642 | | No log | 3.5758 | 118 | 1.1144 | 0.2529 | 1.1144 | 1.0556 | | No log | 3.6364 | 120 | 1.0004 | 0.2927 | 1.0004 | 1.0002 | | No log | 3.6970 | 122 | 0.9945 | 0.2927 | 0.9945 | 0.9972 | | No log | 3.7576 | 124 | 1.1072 | 0.3007 | 1.1072 | 1.0522 | | No log | 3.8182 | 126 | 1.1440 | 0.1999 | 1.1440 | 1.0696 | | No log | 3.8788 | 128 | 1.0889 | 0.2220 | 1.0889 | 1.0435 | | No log | 3.9394 | 130 | 0.9451 | 0.3122 | 0.9451 | 0.9722 | | No log | 4.0 | 132 | 0.8524 | 0.2691 | 0.8524 | 0.9232 | | No log | 4.0606 | 134 | 0.7881 | 0.2550 | 0.7881 | 0.8878 | | No log | 4.1212 | 136 | 0.7575 | 0.2702 | 0.7575 | 0.8704 | | No log | 4.1818 | 138 | 0.7906 | 0.2157 | 0.7906 | 0.8892 | | No log | 4.2424 | 140 | 0.7875 | 0.2302 | 0.7875 | 0.8874 | | No log | 4.3030 | 142 | 0.7493 | 0.2157 | 0.7493 | 0.8656 | | No log | 4.3636 | 144 | 0.7239 | 0.2169 | 0.7239 | 0.8508 | | No log | 4.4242 | 146 | 0.6867 | 0.2516 | 0.6867 | 0.8287 | | No log | 4.4848 | 148 | 0.6895 | 0.3045 | 0.6895 | 0.8304 | | No log | 4.5455 | 150 | 0.7170 | 0.2583 | 0.7170 | 0.8468 | | No log | 4.6061 | 152 | 0.7740 | 0.2853 | 0.7740 | 0.8798 | | No log | 4.6667 | 154 | 0.9226 | 0.3325 | 0.9226 | 0.9605 | | No log | 4.7273 | 156 | 1.0735 | 0.3345 | 1.0735 | 1.0361 | | No log | 4.7879 | 158 | 1.0616 | 0.3378 | 1.0616 | 1.0304 | | No log | 4.8485 | 160 | 0.9163 | 0.3466 | 0.9163 | 0.9573 | | No log | 4.9091 | 162 | 0.8384 | 0.3045 | 0.8384 | 0.9156 | | No log | 4.9697 | 164 | 0.7990 | 0.2712 | 0.7990 | 0.8939 | | No log | 5.0303 | 166 | 0.7694 | 0.2712 | 0.7694 | 0.8772 | | No log | 5.0909 | 168 | 0.8134 | 0.3173 | 0.8134 | 0.9019 | | No log | 5.1515 | 170 | 0.8259 | 0.2658 | 0.8259 | 0.9088 | | No log | 5.2121 | 172 | 0.8114 | 0.2658 | 0.8114 | 0.9008 | | No log | 5.2727 | 174 | 0.7764 | 0.2983 | 0.7764 | 0.8812 | | No log | 5.3333 | 176 | 0.7964 | 0.2669 | 0.7964 | 0.8924 | | No log | 5.3939 | 178 | 0.8606 | 0.2720 | 0.8606 | 0.9277 | | No log | 5.4545 | 180 | 0.8940 | 0.3056 | 0.8940 | 0.9455 | | No log | 5.5152 | 182 | 0.8744 | 0.2658 | 0.8744 | 0.9351 | | No log | 5.5758 | 184 | 0.8170 | 0.2658 | 0.8170 | 0.9039 | | No log | 5.6364 | 186 | 0.8168 | 0.2658 | 0.8168 | 0.9037 | | No log | 5.6970 | 188 | 0.8427 | 0.3056 | 
0.8427 | 0.9180 | | No log | 5.7576 | 190 | 0.9246 | 0.3374 | 0.9246 | 0.9616 | | No log | 5.8182 | 192 | 0.9456 | 0.3412 | 0.9456 | 0.9724 | | No log | 5.8788 | 194 | 0.8881 | 0.3374 | 0.8881 | 0.9424 | | No log | 5.9394 | 196 | 0.8398 | 0.3684 | 0.8398 | 0.9164 | | No log | 6.0 | 198 | 0.7755 | 0.2691 | 0.7755 | 0.8806 | | No log | 6.0606 | 200 | 0.7484 | 0.2550 | 0.7484 | 0.8651 | | No log | 6.1212 | 202 | 0.7689 | 0.2691 | 0.7689 | 0.8769 | | No log | 6.1818 | 204 | 0.8368 | 0.3103 | 0.8368 | 0.9148 | | No log | 6.2424 | 206 | 0.9035 | 0.3449 | 0.9035 | 0.9505 | | No log | 6.3030 | 208 | 0.9980 | 0.3664 | 0.9980 | 0.9990 | | No log | 6.3636 | 210 | 1.0118 | 0.3224 | 1.0118 | 1.0059 | | No log | 6.4242 | 212 | 0.9613 | 0.3761 | 0.9613 | 0.9804 | | No log | 6.4848 | 214 | 0.8568 | 0.3449 | 0.8568 | 0.9256 | | No log | 6.5455 | 216 | 0.7860 | 0.3253 | 0.7860 | 0.8866 | | No log | 6.6061 | 218 | 0.7721 | 0.3531 | 0.7721 | 0.8787 | | No log | 6.6667 | 220 | 0.8247 | 0.3714 | 0.8247 | 0.9081 | | No log | 6.7273 | 222 | 0.9336 | 0.3223 | 0.9336 | 0.9662 | | No log | 6.7879 | 224 | 1.0138 | 0.2830 | 1.0138 | 1.0069 | | No log | 6.8485 | 226 | 1.0187 | 0.2830 | 1.0187 | 1.0093 | | No log | 6.9091 | 228 | 1.0280 | 0.2830 | 1.0280 | 1.0139 | | No log | 6.9697 | 230 | 0.9785 | 0.3223 | 0.9785 | 0.9892 | | No log | 7.0303 | 232 | 0.9611 | 0.3188 | 0.9611 | 0.9804 | | No log | 7.0909 | 234 | 0.9436 | 0.3578 | 0.9436 | 0.9714 | | No log | 7.1515 | 236 | 0.9668 | 0.3578 | 0.9668 | 0.9833 | | No log | 7.2121 | 238 | 1.0428 | 0.2786 | 1.0428 | 1.0212 | | No log | 7.2727 | 240 | 1.0737 | 0.2662 | 1.0737 | 1.0362 | | No log | 7.3333 | 242 | 1.0455 | 0.2786 | 1.0455 | 1.0225 | | No log | 7.3939 | 244 | 0.9599 | 0.3578 | 0.9599 | 0.9797 | | No log | 7.4545 | 246 | 0.8834 | 0.3103 | 0.8834 | 0.9399 | | No log | 7.5152 | 248 | 0.8720 | 0.3103 | 0.8720 | 0.9338 | | No log | 7.5758 | 250 | 0.8287 | 0.3103 | 0.8287 | 0.9103 | | No log | 7.6364 | 252 | 0.8205 | 0.3103 | 0.8205 | 0.9058 | | No log | 7.6970 | 254 | 0.7990 | 0.3103 | 0.7990 | 0.8939 | | No log | 7.7576 | 256 | 0.8080 | 0.3103 | 0.8080 | 0.8989 | | No log | 7.8182 | 258 | 0.8386 | 0.3103 | 0.8386 | 0.9157 | | No log | 7.8788 | 260 | 0.8604 | 0.3449 | 0.8604 | 0.9276 | | No log | 7.9394 | 262 | 0.8689 | 0.3449 | 0.8689 | 0.9322 | | No log | 8.0 | 264 | 0.8946 | 0.3761 | 0.8946 | 0.9458 | | No log | 8.0606 | 266 | 0.9261 | 0.3578 | 0.9261 | 0.9623 | | No log | 8.1212 | 268 | 0.9562 | 0.3188 | 0.9562 | 0.9779 | | No log | 8.1818 | 270 | 0.9393 | 0.3150 | 0.9393 | 0.9692 | | No log | 8.2424 | 272 | 0.9225 | 0.3150 | 0.9225 | 0.9605 | | No log | 8.3030 | 274 | 0.8851 | 0.3548 | 0.8851 | 0.9408 | | No log | 8.3636 | 276 | 0.8741 | 0.3548 | 0.8741 | 0.9349 | | No log | 8.4242 | 278 | 0.8903 | 0.3548 | 0.8903 | 0.9436 | | No log | 8.4848 | 280 | 0.9031 | 0.3111 | 0.9031 | 0.9503 | | No log | 8.5455 | 282 | 0.9290 | 0.3150 | 0.9290 | 0.9639 | | No log | 8.6061 | 284 | 0.9363 | 0.3150 | 0.9363 | 0.9676 | | No log | 8.6667 | 286 | 0.9592 | 0.3188 | 0.9592 | 0.9794 | | No log | 8.7273 | 288 | 0.9648 | 0.3188 | 0.9648 | 0.9823 | | No log | 8.7879 | 290 | 0.9804 | 0.3223 | 0.9804 | 0.9901 | | No log | 8.8485 | 292 | 0.9959 | 0.3291 | 0.9959 | 0.9979 | | No log | 8.9091 | 294 | 1.0197 | 0.3322 | 1.0197 | 1.0098 | | No log | 8.9697 | 296 | 1.0405 | 0.2950 | 1.0405 | 1.0201 | | No log | 9.0303 | 298 | 1.0356 | 0.2950 | 1.0356 | 1.0176 | | No log | 9.0909 | 300 | 1.0159 | 0.3291 | 1.0159 | 1.0079 | | No log | 9.1515 | 302 | 1.0057 | 0.3291 | 1.0057 | 1.0029 | | No log 
| 9.2121 | 304 | 0.9870 | 0.3223 | 0.9870 | 0.9935 | | No log | 9.2727 | 306 | 0.9713 | 0.3150 | 0.9713 | 0.9855 | | No log | 9.3333 | 308 | 0.9476 | 0.3150 | 0.9476 | 0.9734 | | No log | 9.3939 | 310 | 0.9313 | 0.3548 | 0.9313 | 0.9650 | | No log | 9.4545 | 312 | 0.9128 | 0.3548 | 0.9128 | 0.9554 | | No log | 9.5152 | 314 | 0.8925 | 0.3761 | 0.8925 | 0.9447 | | No log | 9.5758 | 316 | 0.8776 | 0.3761 | 0.8776 | 0.9368 | | No log | 9.6364 | 318 | 0.8710 | 0.3761 | 0.8710 | 0.9333 | | No log | 9.6970 | 320 | 0.8724 | 0.3761 | 0.8724 | 0.9340 | | No log | 9.7576 | 322 | 0.8749 | 0.3761 | 0.8749 | 0.9354 | | No log | 9.8182 | 324 | 0.8805 | 0.3761 | 0.8805 | 0.9383 | | No log | 9.8788 | 326 | 0.8830 | 0.3761 | 0.8830 | 0.9397 | | No log | 9.9394 | 328 | 0.8845 | 0.3761 | 0.8845 | 0.9405 | | No log | 10.0 | 330 | 0.8845 | 0.3761 | 0.8845 | 0.9405 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
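The card above documents training metrics but no usage snippet; a hedged sketch of loading the checkpoint with the standard transformers text-classification pipeline follows (the label set and expected input format are not documented in the card, so the example input is illustrative):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k10_task2_organization_fold0",
)

# Example Arabic input; the card does not document the label names or scoring rubric,
# so interpret the returned labels with the task's own mapping.
print(clf("هذا نص تجريبي لتقييم التنظيم."))
```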
mradermacher/Crystal_1.8b-GGUF
mradermacher
2024-11-26T14:17:41Z
14
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Sakalti/Crystal_1.8b", "base_model:quantized:Sakalti/Crystal_1.8b", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T12:39:27Z
--- base_model: Sakalti/Crystal_1.8b language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/Sakalti/Crystal_1.8b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Crystal_1.8b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q2_K.gguf) | Q2_K | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q3_K_S.gguf) | Q3_K_S | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q3_K_L.gguf) | Q3_K_L | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.IQ4_XS.gguf) | IQ4_XS | 1.1 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q4_0_4_4.gguf) | Q4_0_4_4 | 1.2 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q5_K_S.gguf) | Q5_K_S | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q5_K_M.gguf) | Q5_K_M | 1.4 | | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q6_K.gguf) | Q6_K | 1.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Crystal_1.8b-GGUF/resolve/main/Crystal_1.8b.f16.gguf) | f16 | 3.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
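The card above points readers to external READMEs for GGUF usage; as one option, here is a sketch using the llama-cpp-python bindings (an assumption, not mentioned in the card) to pull one of the listed quant files directly from the Hub:

```python
from llama_cpp import Llama  # assumes llama-cpp-python is installed

# Any of the quant files from the table above should work; Q4_K_M is used here
# only as an example of the "fast, recommended" tier.
llm = Llama.from_pretrained(
    repo_id="mradermacher/Crystal_1.8b-GGUF",
    filename="Crystal_1.8b.Q4_K_M.gguf",
    n_ctx=2048,
)
out = llm("Write one sentence about crystals.", max_tokens=48)
print(out["choices"][0]["text"])
```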
platzi/platzi-vit-model-Nicolas
platzi
2024-11-26T14:17:29Z
198
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-26T14:13:20Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: platzi-vit-model-Nicolas results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-Nicolas This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1528 - Accuracy: 0.9624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.0662 | 3.8462 | 500 | 0.1528 | 0.9624 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
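A short, hedged inference sketch for the fine-tuned ViT classifier above, using the generic transformers image-classification pipeline (the card does not name the dataset or label set, so the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="platzi/platzi-vit-model-Nicolas")

# Replace with a real image path; the label names come from the model's config,
# which the card does not describe.
predictions = classifier("example.jpg")
print(predictions)
```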
mradermacher/cantonesellm-lihkg-story-merged-GGUF
mradermacher
2024-11-26T14:17:15Z
9
0
transformers
[ "transformers", "gguf", "en", "base_model:wcyat/cantonesellm-lihkg-story-merged", "base_model:quantized:wcyat/cantonesellm-lihkg-story-merged", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T13:42:51Z
--- base_model: wcyat/cantonesellm-lihkg-story-merged language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/wcyat/cantonesellm-lihkg-story-merged <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q2_K.gguf) | Q2_K | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q3_K_S.gguf) | Q3_K_S | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q3_K_M.gguf) | Q3_K_M | 3.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q3_K_L.gguf) | Q3_K_L | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.IQ4_XS.gguf) | IQ4_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q4_0_4_4.gguf) | Q4_0_4_4 | 3.6 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q4_K_S.gguf) | Q4_K_S | 3.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q4_K_M.gguf) | Q4_K_M | 3.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q5_K_S.gguf) | Q5_K_S | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q5_K_M.gguf) | Q5_K_M | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q6_K.gguf) | Q6_K | 5.1 | very good quality | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.Q8_0.gguf) | Q8_0 | 6.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/cantonesellm-lihkg-story-merged-GGUF/resolve/main/cantonesellm-lihkg-story-merged.f16.gguf) | f16 | 12.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See 
https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
furrutiav/roberta_mixtral_nllfg_rubric_rte_tf_idf_perplexity
furrutiav
2024-11-26T14:15:41Z
105
0
transformers
[ "transformers", "safetensors", "roberta", "feature-extraction", "arxiv:1910.09700", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-11-25T16:03:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
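The card above is an auto-generated template, but the repo metadata tags the checkpoint as a RoBERTa feature-extraction model; a hedged sketch of pulling hidden states with the generic transformers API follows (the pooling strategy is an assumption, since the intended usage is not documented):

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "furrutiav/roberta_mixtral_nllfg_rubric_rte_tf_idf_perplexity"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("A premise and a hypothesis.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)
embedding = hidden.mean(dim=1)                  # mean pooling is an assumption
print(embedding.shape)
```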
platzi/platzi-vit-model-Jaime-Bermudez
platzi
2024-11-26T14:09:29Z
195
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-26T14:00:31Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: platzi-vit-model-Jaime-Bermudez results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-Jaime-Bermudez This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0241 - Accuracy: 0.9925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1392 | 3.8462 | 500 | 0.0241 | 0.9925 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
PrunaAI/Abhishekcr448-Tiny-Hinglish-Chat-21M-bnb-8bit-smashed
PrunaAI
2024-11-26T14:08:28Z
7
0
null
[ "safetensors", "gpt2", "pruna-ai", "base_model:Abhishekcr448/Tiny-Hinglish-Chat-21M", "base_model:quantized:Abhishekcr448/Tiny-Hinglish-Chat-21M", "8-bit", "bitsandbytes", "region:us" ]
null
2024-11-26T14:08:20Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Abhishekcr448/Tiny-Hinglish-Chat-21M metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with llm-int8. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. 
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo Abhishekcr448/Tiny-Hinglish-Chat-21M installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install transformers accelerate bitsandbytes>0.37.0 ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/Abhishekcr448-Tiny-Hinglish-Chat-21M-bnb-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Abhishekcr448/Tiny-Hinglish-Chat-21M") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Abhishekcr448/Tiny-Hinglish-Chat-21M before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
ThomET/funal_fine-tuned_llama3-8b-241106
ThomET
2024-11-26T14:04:09Z
161
0
transformers
[ "transformers", "safetensors", "llama", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-11-26T14:01:19Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF
KnutJaegersberg
2024-11-26T14:04:07Z
6
1
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "de", "bg", "cs", "da", "el", "en", "es", "et", "fi", "fr", "ga", "hr", "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sl", "sv", "sk", "base_model:openGPT-X/Teuken-7B-instruct-commercial-v0.4", "base_model:quantized:openGPT-X/Teuken-7B-instruct-commercial-v0.4", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T14:03:16Z
--- language: - de - bg - cs - da - el - en - es - et - fi - fr - ga - hr - hu - it - lt - lv - mt - nl - pl - pt - ro - sl - sv - sk metrics: - accuracy - bleu pipeline_tag: text-generation library_name: transformers base_model: openGPT-X/Teuken-7B-instruct-commercial-v0.4 license: apache-2.0 tags: - llama-cpp - gguf-my-repo --- # KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF This model was converted to GGUF format from [`openGPT-X/Teuken-7B-instruct-commercial-v0.4`](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/openGPT-X/Teuken-7B-instruct-commercial-v0.4) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF --hf-file teuken-7b-instruct-commercial-v0.4-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF --hf-file teuken-7b-instruct-commercial-v0.4-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF --hf-file teuken-7b-instruct-commercial-v0.4-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo KnutJaegersberg/Teuken-7B-instruct-commercial-v0.4-Q8_0-GGUF --hf-file teuken-7b-instruct-commercial-v0.4-q8_0.gguf -c 2048 ```
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k4_task2_organization_fold0
MayBashendy
2024-11-26T14:00:57Z
165
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T13:57:49Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k4_task2_organization_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k4_task2_organization_fold0 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8829 - Qwk: 0.4460 - Mse: 0.8829 - Rmse: 0.9396 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.1429 | 2 | 3.5901 | 0.0019 | 3.5901 | 1.8948 | | No log | 0.2857 | 4 | 1.8554 | 0.0195 | 1.8554 | 1.3621 | | No log | 0.4286 | 6 | 1.1036 | 0.1525 | 1.1036 | 1.0505 | | No log | 0.5714 | 8 | 0.8985 | 0.0290 | 0.8985 | 0.9479 | | No log | 0.7143 | 10 | 0.7913 | 0.0983 | 0.7913 | 0.8895 | | No log | 0.8571 | 12 | 0.8742 | -0.0047 | 0.8742 | 0.9350 | | No log | 1.0 | 14 | 1.2218 | 0.0 | 1.2218 | 1.1053 | | No log | 1.1429 | 16 | 1.6738 | -0.0305 | 1.6738 | 1.2937 | | No log | 1.2857 | 18 | 1.6681 | -0.0305 | 1.6681 | 1.2916 | | No log | 1.4286 | 20 | 1.3812 | 0.0 | 1.3812 | 1.1753 | | No log | 1.5714 | 22 | 1.0854 | 0.0 | 1.0854 | 1.0418 | | No log | 1.7143 | 24 | 0.9134 | -0.0486 | 0.9134 | 0.9557 | | No log | 1.8571 | 26 | 0.7955 | 0.1418 | 0.7955 | 0.8919 | | No log | 2.0 | 28 | 0.8573 | 0.0600 | 0.8573 | 0.9259 | | No log | 2.1429 | 30 | 1.0461 | -0.0880 | 1.0461 | 1.0228 | | No log | 2.2857 | 32 | 1.3789 | -0.1111 | 1.3789 | 1.1743 | | No log | 2.4286 | 34 | 1.4194 | -0.1650 | 1.4194 | 1.1914 | | No log | 2.5714 | 36 | 1.2233 | 0.0127 | 1.2233 | 1.1060 | | No log | 2.7143 | 38 | 0.9529 | 0.0630 | 0.9529 | 0.9762 | | No log | 2.8571 | 40 | 0.8676 | 0.1414 | 0.8676 | 0.9314 | | No log | 3.0 | 42 | 0.8769 | 0.2528 | 0.8769 | 0.9365 | | No log | 3.1429 | 44 | 0.8073 | 0.2528 | 0.8073 | 0.8985 | | No log | 3.2857 | 46 | 0.7249 | 0.2720 | 0.7249 | 0.8514 | | No log | 3.4286 | 48 | 0.6900 | 0.1978 | 0.6900 | 0.8307 | | No log | 3.5714 | 50 | 0.8178 | 0.2597 | 0.8178 | 0.9043 | | No log | 3.7143 | 52 | 0.8772 | 0.2250 | 0.8772 | 0.9366 | | No log | 3.8571 | 54 | 0.8604 | 0.2239 | 0.8604 | 0.9276 | | No log | 4.0 | 56 | 0.7539 | 0.2492 | 0.7539 | 0.8683 | | No log | 4.1429 | 58 | 0.7152 | 0.2888 | 0.7152 | 0.8457 | | No log | 4.2857 | 60 | 0.7440 | 0.2914 | 0.7440 | 0.8626 | | No log | 4.4286 | 62 | 0.7544 | 0.2754 | 0.7544 | 0.8686 | | No log | 4.5714 | 64 | 0.7519 | 0.3499 | 0.7519 | 0.8671 | | No log | 4.7143 | 66 | 0.7600 | 0.3499 | 0.7600 | 0.8718 | | No log | 4.8571 | 68 | 0.7671 | 0.1893 | 0.7671 | 0.8758 | | No log | 5.0 | 70 | 0.8229 | 0.1969 | 0.8229 | 0.9071 | | No log | 5.1429 | 72 | 0.8291 | 0.1969 | 0.8291 | 0.9106 | | No log | 5.2857 | 74 | 0.7957 
| 0.2210 | 0.7957 | 0.8920 | | No log | 5.4286 | 76 | 0.8152 | 0.3854 | 0.8152 | 0.9029 | | No log | 5.5714 | 78 | 0.8653 | 0.4221 | 0.8653 | 0.9302 | | No log | 5.7143 | 80 | 0.9118 | 0.4337 | 0.9118 | 0.9549 | | No log | 5.8571 | 82 | 0.9070 | 0.4337 | 0.9070 | 0.9524 | | No log | 6.0 | 84 | 0.8759 | 0.3792 | 0.8759 | 0.9359 | | No log | 6.1429 | 86 | 0.8460 | 0.3571 | 0.8460 | 0.9198 | | No log | 6.2857 | 88 | 0.8496 | 0.3542 | 0.8496 | 0.9217 | | No log | 6.4286 | 90 | 0.8534 | 0.3661 | 0.8534 | 0.9238 | | No log | 6.5714 | 92 | 0.8638 | 0.3485 | 0.8638 | 0.9294 | | No log | 6.7143 | 94 | 0.8910 | 0.4349 | 0.8910 | 0.9439 | | No log | 6.8571 | 96 | 0.9351 | 0.3952 | 0.9351 | 0.9670 | | No log | 7.0 | 98 | 0.9467 | 0.3847 | 0.9467 | 0.9730 | | No log | 7.1429 | 100 | 0.9273 | 0.4175 | 0.9273 | 0.9630 | | No log | 7.2857 | 102 | 0.8638 | 0.4343 | 0.8638 | 0.9294 | | No log | 7.4286 | 104 | 0.8185 | 0.4235 | 0.8185 | 0.9047 | | No log | 7.5714 | 106 | 0.8020 | 0.4461 | 0.8020 | 0.8955 | | No log | 7.7143 | 108 | 0.8008 | 0.4351 | 0.8008 | 0.8949 | | No log | 7.8571 | 110 | 0.8086 | 0.4454 | 0.8086 | 0.8992 | | No log | 8.0 | 112 | 0.8046 | 0.4461 | 0.8046 | 0.8970 | | No log | 8.1429 | 114 | 0.8077 | 0.4454 | 0.8077 | 0.8987 | | No log | 8.2857 | 116 | 0.8085 | 0.4461 | 0.8085 | 0.8991 | | No log | 8.4286 | 118 | 0.8184 | 0.4020 | 0.8184 | 0.9047 | | No log | 8.5714 | 120 | 0.8408 | 0.4235 | 0.8408 | 0.9170 | | No log | 8.7143 | 122 | 0.8588 | 0.4571 | 0.8588 | 0.9267 | | No log | 8.8571 | 124 | 0.8713 | 0.4460 | 0.8713 | 0.9334 | | No log | 9.0 | 126 | 0.8858 | 0.4348 | 0.8858 | 0.9412 | | No log | 9.1429 | 128 | 0.8891 | 0.4348 | 0.8891 | 0.9429 | | No log | 9.2857 | 130 | 0.8929 | 0.4359 | 0.8929 | 0.9449 | | No log | 9.4286 | 132 | 0.8886 | 0.4359 | 0.8886 | 0.9426 | | No log | 9.5714 | 134 | 0.8848 | 0.4348 | 0.8848 | 0.9407 | | No log | 9.7143 | 136 | 0.8844 | 0.4348 | 0.8844 | 0.9404 | | No log | 9.8571 | 138 | 0.8838 | 0.4348 | 0.8838 | 0.9401 | | No log | 10.0 | 140 | 0.8829 | 0.4460 | 0.8829 | 0.9396 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
taronklm/Qwen2.5-0.5B-Instruct-lora-chatbot
taronklm
2024-11-26T14:00:43Z
8
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:Qwen/Qwen2.5-0.5B-Instruct", "base_model:adapter:Qwen/Qwen2.5-0.5B-Instruct", "license:apache-2.0", "region:us" ]
null
2024-11-18T15:40:15Z
--- base_model: Qwen/Qwen2.5-0.5B-Instruct library_name: peft license: apache-2.0 tags: - generated_from_trainer model-index: - name: Qwen2.5-0.5B-Instruct-lora-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-0.5B-Instruct-lora-chatbot This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.13.0 - Transformers 4.45.1 - Pytorch 2.5.1+cpu - Datasets 3.0.1 - Tokenizers 0.20.0
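The card above lists the adapter's training setup but no inference snippet; a hedged sketch of applying the LoRA adapter to the listed base model with peft follows (use of the base model's chat template is an assumption based on the instruct base):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = PeftModel.from_pretrained(base, "taronklm/Qwen2.5-0.5B-Instruct-lora-chatbot")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

# Format a single-turn prompt with the base model's chat template (assumed to still apply).
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi, how are you?"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```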
alpha-mark/stranger_1
alpha-mark
2024-11-26T13:56:27Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T13:52:40Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
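The card above is another empty template; the repo metadata marks the checkpoint as a Llama-architecture text-generation model, so a hedged generic generation sketch follows (the prompt format and intended use are not documented):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="alpha-mark/stranger_1")

# Plain-text prompt; the card does not specify a chat or instruction format.
print(generator("Once upon a time,", max_new_tokens=40)[0]["generated_text"])
```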
mradermacher/OpenHermes-Mixtral-8x7B-GGUF
mradermacher
2024-11-26T13:54:09Z
52
0
transformers
[ "transformers", "gguf", "mixtral", "instruct", "finetune", "llama", "gpt4", "synthetic data", "distillation", "moe", "en", "base_model:orangetin/OpenHermes-Mixtral-8x7B", "base_model:quantized:orangetin/OpenHermes-Mixtral-8x7B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T00:24:31Z
--- base_model: orangetin/OpenHermes-Mixtral-8x7B language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - mixtral - instruct - finetune - llama - gpt4 - synthetic data - distillation - moe --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/orangetin/OpenHermes-Mixtral-8x7B <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q2_K.gguf) | Q2_K | 17.4 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q3_K_S.gguf) | Q3_K_S | 20.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q3_K_M.gguf) | Q3_K_M | 22.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q3_K_L.gguf) | Q3_K_L | 24.3 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.IQ4_XS.gguf) | IQ4_XS | 25.5 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q4_K_S.gguf) | Q4_K_S | 26.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q4_K_M.gguf) | Q4_K_M | 28.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q5_K_S.gguf) | Q5_K_S | 32.3 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q5_K_M.gguf) | Q5_K_M | 33.3 | | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q6_K.gguf) | Q6_K | 38.5 | very good quality | | [GGUF](https://huggingface.co/mradermacher/OpenHermes-Mixtral-8x7B-GGUF/resolve/main/OpenHermes-Mixtral-8x7B.Q8_0.gguf) | Q8_0 | 49.7 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
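The card above points readers to TheBloke's READMEs for GGUF usage but gives no snippet of its own. As a hedged illustration only — not the quantizer's documented workflow — a minimal llama-cpp-python sketch for one of the recommended quants could look like this; the Q4_K_S filename is taken from the table above, while the context size and prompt are placeholders:

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed
# (pip install llama-cpp-python huggingface_hub). Not taken from the card itself.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_S quant ("fast, recommended" in the table above).
gguf_path = hf_hub_download(
    repo_id="mradermacher/OpenHermes-Mixtral-8x7B-GGUF",
    filename="OpenHermes-Mixtral-8x7B.Q4_K_S.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an assumption
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a GGUF quant?"}]
)
print(out["choices"][0]["message"]["content"])
```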
mradermacher/Cotype-Nano-i1-GGUF
mradermacher
2024-11-26T13:52:58Z
11
0
transformers
[ "transformers", "gguf", "ru", "en", "base_model:MTSAIR/Cotype-Nano", "base_model:quantized:MTSAIR/Cotype-Nano", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-26T13:24:30Z
--- base_model: MTSAIR/Cotype-Nano language: - ru - en library_name: transformers license: other license_link: https://huggingface.co/MTSAIR/Cotype-Nano/blob/main/Apache%20License%20MTS%20AI.docx license_name: apache-2.0 quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/MTSAIR/Cotype-Nano <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Cotype-Nano-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 1.0 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 1.0 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 1.0 | fast on arm+sve, low quality | | 
[GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | | | [GGUF](https://huggingface.co/mradermacher/Cotype-Nano-i1-GGUF/resolve/main/Cotype-Nano.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/magnum-v4-12b-GGUF
mradermacher
2024-11-26T13:51:32Z
15029
4
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system", "dataset:anthracite-org/kalo-opus-instruct-3k-filtered-no-system", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:anthracite-org/kalo_opus_misc_240827_no_system", "dataset:anthracite-org/kalo_misc_part2_no_system", "base_model:anthracite-org/magnum-v4-12b", "base_model:quantized:anthracite-org/magnum-v4-12b", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T08:06:01Z
--- base_model: anthracite-org/magnum-v4-12b datasets: - anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system - anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system - anthracite-org/kalo-opus-instruct-3k-filtered-no-system - anthracite-org/nopm_claude_writing_fixed - anthracite-org/kalo_opus_misc_240827_no_system - anthracite-org/kalo_misc_part2_no_system language: - en library_name: transformers license: apache-2.0 license_name: mrl quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anthracite-org/magnum-v4-12b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v4-12b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-12b-GGUF/resolve/main/magnum-v4-12b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/magnum-v4-22b-i1-GGUF
mradermacher
2024-11-26T13:51:28Z
16853
3
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system", "dataset:anthracite-org/kalo-opus-instruct-3k-filtered-no-system", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:anthracite-org/kalo_opus_misc_240827_no_system", "dataset:anthracite-org/kalo_misc_part2_no_system", "base_model:anthracite-org/magnum-v4-22b", "base_model:quantized:anthracite-org/magnum-v4-22b", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-20T15:04:30Z
--- base_model: anthracite-org/magnum-v4-22b datasets: - anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system - anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system - anthracite-org/kalo-opus-instruct-3k-filtered-no-system - anthracite-org/nopm_claude_writing_fixed - anthracite-org/kalo_opus_misc_240827_no_system - anthracite-org/kalo_misc_part2_no_system language: - en library_name: transformers license: other license_name: mrl quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v4-22b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/magnum-v4-22b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-22b-i1-GGUF/resolve/main/magnum-v4-22b.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
hinaltt/Llama-3-8B-Amharic-Video-QandA
hinaltt
2024-11-26T13:51:24Z
5
0
null
[ "safetensors", "llama", "code", "text-generation", "conversational", "am", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct", "region:us" ]
text-generation
2024-11-26T12:52:34Z
--- language: - am base_model: - meta-llama/Meta-Llama-3-8B-Instruct pipeline_tag: text-generation tags: - code ---
mradermacher/magnum-v4-9b-i1-GGUF
mradermacher
2024-11-26T13:51:20Z
254
3
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_16k_llama_v1.1", "dataset:NewEden/Claude-Instruct-5K", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-9b", "base_model:quantized:anthracite-org/magnum-v4-9b", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-20T15:15:43Z
--- base_model: anthracite-org/magnum-v4-9b datasets: - anthracite-org/c2_logs_16k_llama_v1.1 - NewEden/Claude-Instruct-5K - anthracite-org/kalo-opus-instruct-22k-no-refusal - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v4-9b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/magnum-v4-9b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality 
| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF/resolve/main/magnum-v4-9b.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/SFT-LLAMA3-8B-Education-GGUF
mradermacher
2024-11-26T13:51:07Z
6
0
transformers
[ "transformers", "gguf", "en", "base_model:minhquy1624/SFT-LLAMA3-8B-Education", "base_model:quantized:minhquy1624/SFT-LLAMA3-8B-Education", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T13:35:35Z
--- base_model: minhquy1624/SFT-LLAMA3-8B-Education language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/minhquy1624/SFT-LLAMA3-8B-Education <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/SFT-LLAMA3-8B-Education-GGUF/resolve/main/SFT-LLAMA3-8B-Education.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
mradermacher/magnum-v4-27b-i1-GGUF
mradermacher
2024-11-26T13:51:00Z
340
2
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_16k_llama_v1.1", "dataset:NewEden/Claude-Instruct-5K", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-27b", "base_model:quantized:anthracite-org/magnum-v4-27b", "license:gemma", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-21T01:26:42Z
--- base_model: anthracite-org/magnum-v4-27b datasets: - anthracite-org/c2_logs_16k_llama_v1.1 - NewEden/Claude-Instruct-5K - anthracite-org/kalo-opus-instruct-22k-no-refusal - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v4-27b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/magnum-v4-27b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ1_S.gguf) | i1-IQ1_S | 6.2 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ1_M.gguf) | i1-IQ1_M | 6.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 8.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ2_S.gguf) | i1-IQ2_S | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ2_M.gguf) | i1-IQ2_M | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q2_K.gguf) | i1-Q2_K | 10.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 10.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 11.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ3_S.gguf) | i1-IQ3_S | 12.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 12.3 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ3_M.gguf) | i1-IQ3_M | 12.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 13.5 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 14.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q4_0.gguf) | 
i1-Q4_0 | 15.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 15.8 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF/resolve/main/magnum-v4-27b.i1-Q6_K.gguf) | i1-Q6_K | 22.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
PEGurevich/detr-finetuned-balloon-v2-flip_rot_bright_gamma-scheduled
PEGurevich
2024-11-26T13:50:23Z
191
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-11-26T13:50:07Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
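The card above is an unfilled template, so the following is only a hedged sketch based on the repository's DETR/object-detection tags rather than anything the authors documented; the image path is a placeholder:

```python
# Hypothetical usage sketch: a standard DETR checkpoint is normally usable
# through the transformers object-detection pipeline. Nothing here comes
# from the (empty) card above.
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="PEGurevich/detr-finetuned-balloon-v2-flip_rot_bright_gamma-scheduled",
)
print(detector("balloons.jpg"))  # placeholder image path
```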
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k1_task2_organization_fold1
MayBashendy
2024-11-26T13:46:05Z
165
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T13:44:34Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k1_task2_organization_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k1_task2_organization_fold1 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2144 - Qwk: 0.1467 - Mse: 1.2144 - Rmse: 1.1020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.4 | 2 | 3.7271 | -0.0605 | 3.7271 | 1.9306 | | No log | 0.8 | 4 | 1.5854 | 0.1468 | 1.5854 | 1.2591 | | No log | 1.2 | 6 | 0.7302 | 0.1002 | 0.7302 | 0.8545 | | No log | 1.6 | 8 | 0.6196 | 0.3564 | 0.6196 | 0.7872 | | No log | 2.0 | 10 | 0.8956 | -0.2529 | 0.8956 | 0.9463 | | No log | 2.4 | 12 | 1.0018 | -0.0601 | 1.0018 | 1.0009 | | No log | 2.8 | 14 | 0.8659 | -0.1060 | 0.8659 | 0.9305 | | No log | 3.2 | 16 | 0.8088 | -0.0357 | 0.8088 | 0.8993 | | No log | 3.6 | 18 | 0.7840 | -0.0038 | 0.7840 | 0.8855 | | No log | 4.0 | 20 | 0.7839 | 0.0025 | 0.7839 | 0.8854 | | No log | 4.4 | 22 | 0.8070 | -0.0650 | 0.8070 | 0.8983 | | No log | 4.8 | 24 | 0.8483 | -0.0264 | 0.8483 | 0.9210 | | No log | 5.2 | 26 | 0.8709 | -0.0264 | 0.8709 | 0.9332 | | No log | 5.6 | 28 | 0.8727 | -0.0650 | 0.8727 | 0.9342 | | No log | 6.0 | 30 | 0.8935 | -0.0533 | 0.8935 | 0.9452 | | No log | 6.4 | 32 | 0.9479 | -0.0108 | 0.9479 | 0.9736 | | No log | 6.8 | 34 | 1.0149 | -0.0264 | 1.0149 | 1.0074 | | No log | 7.2 | 36 | 1.0203 | -0.0264 | 1.0203 | 1.0101 | | No log | 7.6 | 38 | 1.0270 | 0.0582 | 1.0270 | 1.0134 | | No log | 8.0 | 40 | 1.0451 | 0.1273 | 1.0451 | 1.0223 | | No log | 8.4 | 42 | 1.0897 | 0.0985 | 1.0897 | 1.0439 | | No log | 8.8 | 44 | 1.1484 | 0.1253 | 1.1484 | 1.0717 | | No log | 9.2 | 46 | 1.1819 | 0.1467 | 1.1819 | 1.0871 | | No log | 9.6 | 48 | 1.2013 | 0.1467 | 1.2013 | 1.0960 | | No log | 10.0 | 50 | 1.2144 | 0.1467 | 1.2144 | 1.1020 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
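The card above reports Qwk, Mse, and Rmse but does not show its evaluation code; as an illustrative sketch only, metrics of this kind are commonly computed with scikit-learn as follows (the labels below are hypothetical):

```python
# Illustrative only: how Qwk / Mse / Rmse are typically computed; the actual
# evaluation script behind the table above is not published.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 1, 0])   # hypothetical gold labels
y_pred = np.array([0, 2, 2, 1, 1])   # hypothetical predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = float(np.sqrt(mse))
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```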
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter2_sftsd2
RylanSchaeffer
2024-11-26T13:41:54Z
6
0
null
[ "safetensors", "gemma2", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2-27b", "base_model:finetune:google/gemma-2-27b", "license:gemma", "region:us" ]
null
2024-11-26T13:31:15Z
--- license: gemma base_model: google/gemma-2-27b tags: - trl - sft - generated_from_trainer model-index: - name: collapse_gemma-2-27b_hs2_replace_iter2_sftsd2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # collapse_gemma-2-27b_hs2_replace_iter2_sftsd2 This model is a fine-tuned version of [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.2781 - Num Input Tokens Seen: 3603012 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 4 - eval_batch_size: 16 - seed: 2 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:-----------------:| | No log | 0 | 0 | 1.1282 | 0 | | 2.7288 | 0.0560 | 5 | 1.0428 | 210528 | | 2.2631 | 0.1121 | 10 | 1.0614 | 409500 | | 1.7035 | 0.1681 | 15 | 1.0857 | 615964 | | 1.0224 | 0.2242 | 20 | 1.1604 | 820452 | | 0.7612 | 0.2802 | 25 | 1.1895 | 1018848 | | 0.5972 | 0.3363 | 30 | 1.1930 | 1215560 | | 0.6178 | 0.3923 | 35 | 1.1725 | 1409568 | | 0.4143 | 0.4483 | 40 | 1.1575 | 1608480 | | 0.5734 | 0.5044 | 45 | 1.1651 | 1812632 | | 0.4687 | 0.5604 | 50 | 1.1621 | 2009960 | | 0.6309 | 0.6165 | 55 | 1.1799 | 2223316 | | 0.5393 | 0.6725 | 60 | 1.1957 | 2428084 | | 0.271 | 0.7285 | 65 | 1.2141 | 2637248 | | 0.4383 | 0.7846 | 70 | 1.2308 | 2837024 | | 0.2703 | 0.8406 | 75 | 1.2080 | 3032684 | | 0.3999 | 0.8967 | 80 | 1.2440 | 3240344 | | 0.3003 | 0.9527 | 85 | 1.2357 | 3443968 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
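As a sketch of how the hyperparameters listed in the card above map onto Hugging Face TrainingArguments — the output directory is a placeholder, and the original run may have used TRL's SFT tooling (the card's tags mention trl/sft), so this is not the authors' actual training script:

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments;
# not the authors' script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="collapse_gemma-2-27b_hs2_replace_iter2_sftsd2",  # placeholder
    learning_rate=8e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,  # yields the stated total batch size of 128
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    seed=2,
)
```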
Deev124/unr-1-b
Deev124
2024-11-26T13:40:18Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T13:37:15Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005
griffio
2024-11-26T13:38:24Z
217
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-26T13:31:15Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-large-patch16-224 tags: - image-classification - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005 results: - task: name: Image Classification type: image-classification dataset: name: dungeon-geo-morphs type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 1.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset. It achieves the following results on the evaluation set: - Loss: 0.0274 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.4638 | 4.4444 | 10 | 0.8922 | 0.7714 | | 0.6108 | 8.8889 | 20 | 0.3243 | 0.9347 | | 0.234 | 13.3333 | 30 | 0.1423 | 0.9735 | | 0.1045 | 17.7778 | 40 | 0.0655 | 0.9980 | | 0.0578 | 22.2222 | 50 | 0.0395 | 0.9939 | | 0.0332 | 26.6667 | 60 | 0.0274 | 1.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
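A minimal usage sketch, not taken from the card above: the fine-tuned ViT checkpoint can be loaded through the standard transformers image-classification pipeline; the image file name is a placeholder.

```python
# Hedged usage sketch; the file name below is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-005",
)
print(classifier("dungeon_tile.png"))
```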
xiuqhou/relation-detr-swin-large
xiuqhou
2024-11-26T13:37:14Z
36
0
transformers
[ "transformers", "safetensors", "relation_detr", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-11-26T12:59:09Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
dung6903/results
dung6903
2024-11-26T13:35:27Z
25
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:dung6903/results", "base_model:finetune:dung6903/results", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-20T04:12:32Z
--- library_name: transformers license: mit base_model: dung6903/results tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: results results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [dung6903/results](https://huggingface.co/dung6903/results) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3588 - Accuracy: 0.5149 - F1: 0.5145 - Precision: 0.5180 - Recall: 0.5149 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 12 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 142 | 2.3684 | 0.4612 | 0.4524 | 0.4814 | 0.4612 | | No log | 2.0 | 284 | 2.3173 | 0.5149 | 0.5117 | 0.5226 | 0.5149 | | No log | 3.0 | 426 | 2.3588 | 0.5149 | 0.5145 | 0.5180 | 0.5149 | | 0.2514 | 4.0 | 568 | 2.5204 | 0.4950 | 0.4963 | 0.5052 | 0.4950 | | 0.2514 | 5.0 | 710 | 2.8100 | 0.5050 | 0.5076 | 0.5255 | 0.5050 | | 0.2514 | 6.0 | 852 | 2.9408 | 0.5089 | 0.5056 | 0.5262 | 0.5089 | | 0.2514 | 7.0 | 994 | 3.0119 | 0.5129 | 0.5104 | 0.5245 | 0.5129 | | 0.1376 | 8.0 | 1136 | 3.0821 | 0.4652 | 0.4594 | 0.4763 | 0.4652 | | 0.1376 | 9.0 | 1278 | 3.2156 | 0.4911 | 0.4871 | 0.5028 | 0.4911 | | 0.1376 | 10.0 | 1420 | 3.2285 | 0.4970 | 0.4969 | 0.5024 | 0.4970 | | 0.0934 | 11.0 | 1562 | 3.1558 | 0.4970 | 0.4966 | 0.5030 | 0.4970 | | 0.0934 | 12.0 | 1704 | 3.1808 | 0.5089 | 0.5078 | 0.5184 | 0.5089 | ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
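As a rough illustration of the hyperparameters reported in the card above, the sketch below shows how they would map onto transformers' `TrainingArguments`; the `output_dir` value and the per-device batch-size spelling are assumptions for the sketch, not taken from the original training script.

```python
from transformers import TrainingArguments

# Mirrors only the hyperparameters listed in the card; the dataset,
# model, and Trainer wiring are omitted.
training_args = TrainingArguments(
    output_dir="results",              # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=12,
)
```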
Kunger/Sakura-14B-Qwen2.5-v1.0
Kunger
2024-11-26T13:33:43Z
10
0
null
[ "safetensors", "qwen2", "text-generation", "conversational", "ja", "zh", "license:cc-by-nc-sa-4.0", "region:us" ]
text-generation
2024-11-26T12:01:16Z
--- license: cc-by-nc-sa-4.0 language: - ja - zh pipeline_tag: text-generation --- ## SakuraLLM Dequantized Model ### Why dequantize? llama.cpp is poorly supported on some devices and its inference speed can be limited, so we may prefer to run inference with PyTorch; the transformers library is therefore used to dequantize the GGUF model. ### What is the original model? sakura-14b-qwen2.5-v1.0-q6k.gguf from the [SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF) repository. ### I want to dequantize it myself Transformers now supports dequantizing GGUF models: simply load the GGUF model with `AutoModelForCausalLM.from_pretrained`. ### Is it any good? Since the dequantization starts from the Q6K model, the model's precision is well below F16, and the inference outputs have not been tested. ### Other issues After dequantization the tokenizer's vocabulary turned out to have changed; it is unclear whether this affects usage. You can replace this part of the data with the vocabulary from the QWEN2.5 model.
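Below is a minimal sketch of the dequantization step described in the card, assuming a recent transformers release with GGUF support (the `gguf_file` argument of `from_pretrained`) and the `gguf` package installed; the filename comes from the source repository named in the card, while the output directory name is an illustrative choice.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

gguf_repo = "SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF"
gguf_file = "sakura-14b-qwen2.5-v1.0-q6k.gguf"  # Q6K file named in the card

# Loading a GGUF checkpoint through transformers dequantizes it back to
# regular torch weights, which can then be saved and used for PyTorch inference.
tokenizer = AutoTokenizer.from_pretrained(gguf_repo, gguf_file=gguf_file)
model = AutoModelForCausalLM.from_pretrained(gguf_repo, gguf_file=gguf_file)

out_dir = "Sakura-14B-Qwen2.5-v1.0-dequant"  # illustrative output path
model.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)
```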
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task1_organization_fold1
MayBashendy
2024-11-26T13:31:55Z
183
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T12:52:23Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task1_organization_fold1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k60_task1_organization_fold1 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6201 - Qwk: 0.2740 - Mse: 1.6201 - Rmse: 1.2728 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0074 | 2 | 3.2633 | -0.0342 | 3.2633 | 1.8065 | | No log | 0.0147 | 4 | 2.0668 | 0.0119 | 2.0668 | 1.4376 | | No log | 0.0221 | 6 | 1.7397 | -0.1975 | 1.7397 | 1.3190 | | No log | 0.0294 | 8 | 1.8726 | -0.1605 | 1.8726 | 1.3684 | | No log | 0.0368 | 10 | 1.9532 | -0.1727 | 1.9532 | 1.3976 | | No log | 0.0441 | 12 | 2.0348 | -0.1774 | 2.0348 | 1.4265 | | No log | 0.0515 | 14 | 1.7156 | -0.0633 | 1.7156 | 1.3098 | | No log | 0.0588 | 16 | 1.7743 | -0.0939 | 1.7743 | 1.3320 | | No log | 0.0662 | 18 | 1.7465 | -0.0037 | 1.7465 | 1.3215 | | No log | 0.0735 | 20 | 1.5251 | 0.0328 | 1.5251 | 1.2349 | | No log | 0.0809 | 22 | 1.2574 | 0.0336 | 1.2574 | 1.1213 | | No log | 0.0882 | 24 | 1.1792 | 0.0336 | 1.1792 | 1.0859 | | No log | 0.0956 | 26 | 1.2111 | 0.0336 | 1.2111 | 1.1005 | | No log | 0.1029 | 28 | 1.3638 | 0.0196 | 1.3638 | 1.1678 | | No log | 0.1103 | 30 | 1.3139 | 0.0558 | 1.3139 | 1.1463 | | No log | 0.1176 | 32 | 1.3731 | 0.0196 | 1.3731 | 1.1718 | | No log | 0.125 | 34 | 1.4374 | 0.0739 | 1.4374 | 1.1989 | | No log | 0.1324 | 36 | 1.5605 | 0.0739 | 1.5605 | 1.2492 | | No log | 0.1397 | 38 | 1.4792 | 0.0888 | 1.4792 | 1.2162 | | No log | 0.1471 | 40 | 1.2073 | 0.0336 | 1.2073 | 1.0988 | | No log | 0.1544 | 42 | 1.1085 | 0.0894 | 1.1085 | 1.0528 | | No log | 0.1618 | 44 | 1.3092 | 0.0100 | 1.3092 | 1.1442 | | No log | 0.1691 | 46 | 1.2644 | 0.0100 | 1.2644 | 1.1245 | | No log | 0.1765 | 48 | 1.1756 | 0.0379 | 1.1756 | 1.0842 | | No log | 0.1838 | 50 | 1.1295 | 0.0379 | 1.1295 | 1.0628 | | No log | 0.1912 | 52 | 1.2227 | 0.0 | 1.2227 | 1.1058 | | No log | 0.1985 | 54 | 1.3312 | 0.0888 | 1.3312 | 1.1538 | | No log | 0.2059 | 56 | 1.3114 | 0.0888 | 1.3114 | 1.1452 | | No log | 0.2132 | 58 | 1.2261 | 0.0888 | 1.2261 | 1.1073 | | No log | 0.2206 | 60 | 1.1070 | 0.0379 | 1.1070 | 1.0521 | | No log | 0.2279 | 62 | 1.0817 | 0.1208 | 1.0817 | 1.0400 | | No log | 0.2353 | 64 | 1.2110 | 0.0628 | 1.2110 | 1.1005 | | No log | 0.2426 | 66 | 1.2520 | 0.0628 | 1.2520 | 1.1189 | | No log | 0.25 | 68 | 1.3803 | 0.0313 | 1.3803 | 1.1748 | | No log | 0.2574 | 70 | 1.3775 | 0.0095 | 1.3775 | 1.1737 | | No log | 0.2647 | 72 | 1.3664 | -0.0298 | 1.3664 | 1.1690 | | No log 
| 0.2721 | 74 | 1.4056 | -0.0766 | 1.4056 | 1.1856 | | No log | 0.2794 | 76 | 1.4541 | -0.0766 | 1.4541 | 1.2059 | | No log | 0.2868 | 78 | 1.3208 | 0.0080 | 1.3208 | 1.1493 | | No log | 0.2941 | 80 | 1.2172 | 0.0855 | 1.2172 | 1.1033 | | No log | 0.3015 | 82 | 1.3631 | 0.0206 | 1.3631 | 1.1675 | | No log | 0.3088 | 84 | 1.3893 | -0.0334 | 1.3893 | 1.1787 | | No log | 0.3162 | 86 | 1.2723 | 0.0206 | 1.2723 | 1.1280 | | No log | 0.3235 | 88 | 1.1152 | 0.2210 | 1.1152 | 1.0560 | | No log | 0.3309 | 90 | 1.0331 | 0.2249 | 1.0331 | 1.0164 | | No log | 0.3382 | 92 | 0.9811 | 0.3035 | 0.9811 | 0.9905 | | No log | 0.3456 | 94 | 0.9758 | 0.3878 | 0.9758 | 0.9878 | | No log | 0.3529 | 96 | 1.0529 | 0.2847 | 1.0529 | 1.0261 | | No log | 0.3603 | 98 | 1.2216 | 0.1310 | 1.2216 | 1.1053 | | No log | 0.3676 | 100 | 1.2406 | 0.1310 | 1.2406 | 1.1138 | | No log | 0.375 | 102 | 1.0799 | 0.2950 | 1.0799 | 1.0392 | | No log | 0.3824 | 104 | 0.9615 | 0.3571 | 0.9615 | 0.9806 | | No log | 0.3897 | 106 | 0.9423 | 0.3213 | 0.9423 | 0.9707 | | No log | 0.3971 | 108 | 0.9853 | 0.2263 | 0.9853 | 0.9926 | | No log | 0.4044 | 110 | 0.9941 | 0.2329 | 0.9941 | 0.9970 | | No log | 0.4118 | 112 | 1.0336 | 0.2520 | 1.0336 | 1.0166 | | No log | 0.4191 | 114 | 1.0738 | 0.1837 | 1.0738 | 1.0362 | | No log | 0.4265 | 116 | 1.0757 | 0.1709 | 1.0757 | 1.0372 | | No log | 0.4338 | 118 | 1.0540 | 0.2213 | 1.0540 | 1.0267 | | No log | 0.4412 | 120 | 1.2379 | 0.0821 | 1.2379 | 1.1126 | | No log | 0.4485 | 122 | 1.5216 | 0.0818 | 1.5216 | 1.2335 | | No log | 0.4559 | 124 | 1.4907 | 0.0739 | 1.4907 | 1.2210 | | No log | 0.4632 | 126 | 1.1662 | 0.0721 | 1.1662 | 1.0799 | | No log | 0.4706 | 128 | 1.0358 | 0.1848 | 1.0358 | 1.0177 | | No log | 0.4779 | 130 | 1.1762 | 0.1341 | 1.1762 | 1.0845 | | No log | 0.4853 | 132 | 1.1784 | 0.0730 | 1.1784 | 1.0855 | | No log | 0.4926 | 134 | 1.0695 | 0.2166 | 1.0695 | 1.0342 | | No log | 0.5 | 136 | 0.9749 | 0.3130 | 0.9749 | 0.9873 | | No log | 0.5074 | 138 | 0.9908 | 0.4499 | 0.9908 | 0.9954 | | No log | 0.5147 | 140 | 1.2428 | 0.1289 | 1.2428 | 1.1148 | | No log | 0.5221 | 142 | 1.4890 | 0.1281 | 1.4890 | 1.2202 | | No log | 0.5294 | 144 | 1.6315 | 0.1682 | 1.6315 | 1.2773 | | No log | 0.5368 | 146 | 1.5743 | 0.1102 | 1.5743 | 1.2547 | | No log | 0.5441 | 148 | 1.3862 | 0.0739 | 1.3862 | 1.1774 | | No log | 0.5515 | 150 | 1.2011 | 0.0379 | 1.2011 | 1.0959 | | No log | 0.5588 | 152 | 1.0749 | 0.0757 | 1.0749 | 1.0368 | | No log | 0.5662 | 154 | 1.0239 | 0.2520 | 1.0239 | 1.0119 | | No log | 0.5735 | 156 | 1.0217 | 0.3742 | 1.0217 | 1.0108 | | No log | 0.5809 | 158 | 1.0660 | 0.1890 | 1.0660 | 1.0325 | | No log | 0.5882 | 160 | 1.1310 | 0.1754 | 1.1310 | 1.0635 | | No log | 0.5956 | 162 | 1.1959 | 0.1412 | 1.1959 | 1.0936 | | No log | 0.6029 | 164 | 1.2442 | 0.1440 | 1.2442 | 1.1154 | | No log | 0.6103 | 166 | 1.3009 | 0.1644 | 1.3009 | 1.1406 | | No log | 0.6176 | 168 | 1.4104 | 0.2176 | 1.4104 | 1.1876 | | No log | 0.625 | 170 | 1.4477 | 0.2088 | 1.4477 | 1.2032 | | No log | 0.6324 | 172 | 1.4943 | 0.1912 | 1.4943 | 1.2224 | | No log | 0.6397 | 174 | 1.4597 | 0.1811 | 1.4597 | 1.2082 | | No log | 0.6471 | 176 | 1.4015 | 0.1731 | 1.4015 | 1.1838 | | No log | 0.6544 | 178 | 1.2286 | 0.1593 | 1.2286 | 1.1084 | | No log | 0.6618 | 180 | 1.1630 | 0.1699 | 1.1630 | 1.0784 | | No log | 0.6691 | 182 | 1.1645 | 0.1604 | 1.1645 | 1.0791 | | No log | 0.6765 | 184 | 1.2672 | 0.2007 | 1.2672 | 1.1257 | | No log | 0.6838 | 186 | 1.5729 | 0.2966 | 1.5729 | 1.2541 | | No log | 0.6912 | 188 | 1.7250 | 0.3069 | 
1.7250 | 1.3134 | | No log | 0.6985 | 190 | 1.6290 | 0.2726 | 1.6290 | 1.2763 | | No log | 0.7059 | 192 | 1.3814 | 0.2900 | 1.3814 | 1.1753 | | No log | 0.7132 | 194 | 1.2710 | 0.2288 | 1.2710 | 1.1274 | | No log | 0.7206 | 196 | 1.2586 | 0.2288 | 1.2586 | 1.1219 | | No log | 0.7279 | 198 | 1.2345 | 0.1926 | 1.2345 | 1.1111 | | No log | 0.7353 | 200 | 1.2250 | 0.2200 | 1.2250 | 1.1068 | | No log | 0.7426 | 202 | 1.2992 | 0.2820 | 1.2992 | 1.1398 | | No log | 0.75 | 204 | 1.3251 | 0.3048 | 1.3251 | 1.1511 | | No log | 0.7574 | 206 | 1.2840 | 0.3429 | 1.2840 | 1.1331 | | No log | 0.7647 | 208 | 1.3777 | 0.3423 | 1.3777 | 1.1738 | | No log | 0.7721 | 210 | 1.3868 | 0.3656 | 1.3868 | 1.1776 | | No log | 0.7794 | 212 | 1.3242 | 0.3002 | 1.3242 | 1.1508 | | No log | 0.7868 | 214 | 1.1618 | 0.2016 | 1.1618 | 1.0778 | | No log | 0.7941 | 216 | 1.0996 | 0.2402 | 1.0996 | 1.0486 | | No log | 0.8015 | 218 | 1.0351 | 0.2119 | 1.0351 | 1.0174 | | No log | 0.8088 | 220 | 1.0058 | 0.2139 | 1.0058 | 1.0029 | | No log | 0.8162 | 222 | 1.0276 | 0.2042 | 1.0276 | 1.0137 | | No log | 0.8235 | 224 | 1.0591 | 0.1628 | 1.0591 | 1.0291 | | No log | 0.8309 | 226 | 1.0790 | 0.1532 | 1.0790 | 1.0387 | | No log | 0.8382 | 228 | 1.0788 | 0.1434 | 1.0788 | 1.0387 | | No log | 0.8456 | 230 | 1.0909 | 0.2287 | 1.0909 | 1.0445 | | No log | 0.8529 | 232 | 1.1561 | 0.2442 | 1.1561 | 1.0752 | | No log | 0.8603 | 234 | 1.3375 | 0.3172 | 1.3375 | 1.1565 | | No log | 0.8676 | 236 | 1.6069 | 0.3182 | 1.6069 | 1.2676 | | No log | 0.875 | 238 | 1.7126 | 0.3254 | 1.7126 | 1.3087 | | No log | 0.8824 | 240 | 1.7792 | 0.3353 | 1.7792 | 1.3339 | | No log | 0.8897 | 242 | 1.5837 | 0.3218 | 1.5837 | 1.2585 | | No log | 0.8971 | 244 | 1.1920 | 0.3736 | 1.1920 | 1.0918 | | No log | 0.9044 | 246 | 0.9100 | 0.2987 | 0.9100 | 0.9539 | | No log | 0.9118 | 248 | 0.9082 | 0.3663 | 0.9082 | 0.9530 | | No log | 0.9191 | 250 | 1.1271 | 0.3853 | 1.1271 | 1.0617 | | No log | 0.9265 | 252 | 1.4528 | 0.3699 | 1.4528 | 1.2053 | | No log | 0.9338 | 254 | 1.7064 | 0.3513 | 1.7064 | 1.3063 | | No log | 0.9412 | 256 | 1.7726 | 0.3448 | 1.7726 | 1.3314 | | No log | 0.9485 | 258 | 1.6517 | 0.3565 | 1.6517 | 1.2852 | | No log | 0.9559 | 260 | 1.3358 | 0.3290 | 1.3358 | 1.1558 | | No log | 0.9632 | 262 | 1.0037 | 0.2421 | 1.0037 | 1.0018 | | No log | 0.9706 | 264 | 0.8952 | 0.3566 | 0.8952 | 0.9462 | | No log | 0.9779 | 266 | 0.9299 | 0.3566 | 0.9299 | 0.9643 | | No log | 0.9853 | 268 | 1.0097 | 0.2023 | 1.0097 | 1.0049 | | No log | 0.9926 | 270 | 1.1229 | 0.1613 | 1.1229 | 1.0597 | | No log | 1.0 | 272 | 1.2120 | 0.3349 | 1.2120 | 1.1009 | | No log | 1.0074 | 274 | 1.1838 | 0.3251 | 1.1838 | 1.0880 | | No log | 1.0147 | 276 | 1.0870 | 0.3057 | 1.0870 | 1.0426 | | No log | 1.0221 | 278 | 1.0253 | 0.2899 | 1.0253 | 1.0126 | | No log | 1.0294 | 280 | 0.9563 | 0.2897 | 0.9563 | 0.9779 | | No log | 1.0368 | 282 | 1.0526 | 0.3848 | 1.0526 | 1.0260 | | No log | 1.0441 | 284 | 1.2661 | 0.3873 | 1.2661 | 1.1252 | | No log | 1.0515 | 286 | 1.4964 | 0.3545 | 1.4964 | 1.2233 | | No log | 1.0588 | 288 | 1.5950 | 0.3399 | 1.5950 | 1.2629 | | No log | 1.0662 | 290 | 1.3243 | 0.3642 | 1.3243 | 1.1508 | | No log | 1.0735 | 292 | 0.9634 | 0.2708 | 0.9634 | 0.9815 | | No log | 1.0809 | 294 | 0.8988 | 0.3228 | 0.8988 | 0.9480 | | No log | 1.0882 | 296 | 0.9379 | 0.2936 | 0.9379 | 0.9684 | | No log | 1.0956 | 298 | 1.1429 | 0.3002 | 1.1429 | 1.0691 | | No log | 1.1029 | 300 | 1.4219 | 0.3729 | 1.4219 | 1.1924 | | No log | 1.1103 | 302 | 1.4460 | 0.3729 | 1.4460 | 1.2025 | | No log 
| 1.1176 | 304 | 1.2348 | 0.3482 | 1.2348 | 1.1112 | | No log | 1.125 | 306 | 1.1170 | 0.2522 | 1.1170 | 1.0569 | | No log | 1.1324 | 308 | 1.1097 | 0.2522 | 1.1097 | 1.0534 | | No log | 1.1397 | 310 | 1.1703 | 0.3092 | 1.1703 | 1.0818 | | No log | 1.1471 | 312 | 1.1501 | 0.3516 | 1.1501 | 1.0724 | | No log | 1.1544 | 314 | 1.1898 | 0.3642 | 1.1898 | 1.0908 | | No log | 1.1618 | 316 | 1.1431 | 0.3308 | 1.1431 | 1.0691 | | No log | 1.1691 | 318 | 1.1074 | 0.2767 | 1.1074 | 1.0523 | | No log | 1.1765 | 320 | 1.0938 | 0.2522 | 1.0938 | 1.0458 | | No log | 1.1838 | 322 | 1.0801 | 0.2522 | 1.0801 | 1.0393 | | No log | 1.1912 | 324 | 1.0924 | 0.2312 | 1.0924 | 1.0452 | | No log | 1.1985 | 326 | 1.1607 | 0.2571 | 1.1607 | 1.0774 | | No log | 1.2059 | 328 | 1.2310 | 0.2571 | 1.2310 | 1.1095 | | No log | 1.2132 | 330 | 1.3094 | 0.2655 | 1.3094 | 1.1443 | | No log | 1.2206 | 332 | 1.4433 | 0.3399 | 1.4433 | 1.2014 | | No log | 1.2279 | 334 | 1.4984 | 0.3144 | 1.4984 | 1.2241 | | No log | 1.2353 | 336 | 1.4569 | 0.3399 | 1.4569 | 1.2070 | | No log | 1.2426 | 338 | 1.4938 | 0.3269 | 1.4938 | 1.2222 | | No log | 1.25 | 340 | 1.5477 | 0.3023 | 1.5477 | 1.2441 | | No log | 1.2574 | 342 | 1.5381 | 0.3023 | 1.5381 | 1.2402 | | No log | 1.2647 | 344 | 1.2931 | 0.3326 | 1.2931 | 1.1372 | | No log | 1.2721 | 346 | 1.1910 | 0.3148 | 1.1910 | 1.0913 | | No log | 1.2794 | 348 | 1.1974 | 0.3858 | 1.1974 | 1.0942 | | No log | 1.2868 | 350 | 1.3849 | 0.3218 | 1.3849 | 1.1768 | | No log | 1.2941 | 352 | 1.6697 | 0.3024 | 1.6697 | 1.2922 | | No log | 1.3015 | 354 | 2.0130 | 0.2105 | 2.0130 | 1.4188 | | No log | 1.3088 | 356 | 1.9176 | 0.2183 | 1.9176 | 1.3848 | | No log | 1.3162 | 358 | 1.5942 | 0.3218 | 1.5942 | 1.2626 | | No log | 1.3235 | 360 | 1.3281 | 0.3269 | 1.3281 | 1.1524 | | No log | 1.3309 | 362 | 1.1794 | 0.3059 | 1.1794 | 1.0860 | | No log | 1.3382 | 364 | 1.1378 | 0.2041 | 1.1378 | 1.0667 | | No log | 1.3456 | 366 | 1.1112 | 0.1926 | 1.1112 | 1.0541 | | No log | 1.3529 | 368 | 1.0834 | 0.1926 | 1.0834 | 1.0409 | | No log | 1.3603 | 370 | 1.1045 | 0.1926 | 1.1045 | 1.0509 | | No log | 1.3676 | 372 | 1.2234 | 0.3229 | 1.2234 | 1.1061 | | No log | 1.375 | 374 | 1.4945 | 0.3218 | 1.4945 | 1.2225 | | No log | 1.3824 | 376 | 1.6210 | 0.3172 | 1.6210 | 1.2732 | | No log | 1.3897 | 378 | 1.4695 | 0.3254 | 1.4695 | 1.2122 | | No log | 1.3971 | 380 | 1.2642 | 0.3687 | 1.2642 | 1.1244 | | No log | 1.4044 | 382 | 1.2068 | 0.3687 | 1.2068 | 1.0985 | | No log | 1.4118 | 384 | 1.1862 | 0.3482 | 1.1862 | 1.0891 | | No log | 1.4191 | 386 | 1.1786 | 0.3229 | 1.1786 | 1.0856 | | No log | 1.4265 | 388 | 1.2391 | 0.3269 | 1.2391 | 1.1132 | | No log | 1.4338 | 390 | 1.2271 | 0.3269 | 1.2271 | 1.1077 | | No log | 1.4412 | 392 | 1.1947 | 0.3482 | 1.1947 | 1.0930 | | No log | 1.4485 | 394 | 1.2296 | 0.3687 | 1.2296 | 1.1089 | | No log | 1.4559 | 396 | 1.3804 | 0.3340 | 1.3804 | 1.1749 | | No log | 1.4632 | 398 | 1.6048 | 0.3024 | 1.6048 | 1.2668 | | No log | 1.4706 | 400 | 1.7662 | 0.3024 | 1.7662 | 1.3290 | | No log | 1.4779 | 402 | 1.8089 | 0.2630 | 1.8089 | 1.3450 | | No log | 1.4853 | 404 | 1.5771 | 0.2986 | 1.5771 | 1.2558 | | No log | 1.4926 | 406 | 1.4493 | 0.3132 | 1.4493 | 1.2039 | | No log | 1.5 | 408 | 1.3144 | 0.3642 | 1.3144 | 1.1465 | | No log | 1.5074 | 410 | 1.1933 | 0.3911 | 1.1933 | 1.0924 | | No log | 1.5147 | 412 | 1.1324 | 0.3687 | 1.1324 | 1.0641 | | No log | 1.5221 | 414 | 1.0442 | 0.3835 | 1.0442 | 1.0218 | | No log | 1.5294 | 416 | 1.0451 | 0.3572 | 1.0451 | 1.0223 | | No log | 1.5368 | 418 | 1.0449 | 
0.3748 | 1.0449 | 1.0222 | | No log | 1.5441 | 420 | 1.1342 | 0.3985 | 1.1342 | 1.0650 | | No log | 1.5515 | 422 | 1.3274 | 0.3515 | 1.3274 | 1.1521 | | No log | 1.5588 | 424 | 1.6029 | 0.3369 | 1.6029 | 1.2660 | | No log | 1.5662 | 426 | 1.5509 | 0.3338 | 1.5509 | 1.2453 | | No log | 1.5735 | 428 | 1.2756 | 0.3515 | 1.2756 | 1.1294 | | No log | 1.5809 | 430 | 1.1692 | 0.3759 | 1.1692 | 1.0813 | | No log | 1.5882 | 432 | 1.1364 | 0.3759 | 1.1364 | 1.0660 | | No log | 1.5956 | 434 | 1.1588 | 0.3759 | 1.1588 | 1.0765 | | No log | 1.6029 | 436 | 1.1209 | 0.3832 | 1.1209 | 1.0587 | | No log | 1.6103 | 438 | 1.1296 | 0.3832 | 1.1296 | 1.0628 | | No log | 1.6176 | 440 | 1.0364 | 0.4128 | 1.0364 | 1.0180 | | No log | 1.625 | 442 | 0.9494 | 0.4987 | 0.9494 | 0.9744 | | No log | 1.6324 | 444 | 0.9824 | 0.5158 | 0.9824 | 0.9912 | | No log | 1.6397 | 446 | 1.2343 | 0.4048 | 1.2343 | 1.1110 | | No log | 1.6471 | 448 | 1.7462 | 0.2513 | 1.7462 | 1.3214 | | No log | 1.6544 | 450 | 1.8950 | 0.2465 | 1.8950 | 1.3766 | | No log | 1.6618 | 452 | 1.7103 | 0.2653 | 1.7103 | 1.3078 | | No log | 1.6691 | 454 | 1.2956 | 0.3684 | 1.2956 | 1.1382 | | No log | 1.6765 | 456 | 1.1302 | 0.3727 | 1.1302 | 1.0631 | | No log | 1.6838 | 458 | 1.0704 | 0.3832 | 1.0704 | 1.0346 | | No log | 1.6912 | 460 | 1.1033 | 0.3895 | 1.1033 | 1.0504 | | No log | 1.6985 | 462 | 1.2670 | 0.3642 | 1.2670 | 1.1256 | | No log | 1.7059 | 464 | 1.5303 | 0.3100 | 1.5303 | 1.2370 | | No log | 1.7132 | 466 | 1.6171 | 0.2952 | 1.6171 | 1.2717 | | No log | 1.7206 | 468 | 1.6970 | 0.2989 | 1.6970 | 1.3027 | | No log | 1.7279 | 470 | 1.4750 | 0.3218 | 1.4750 | 1.2145 | | No log | 1.7353 | 472 | 1.1896 | 0.3382 | 1.1896 | 1.0907 | | No log | 1.7426 | 474 | 0.9819 | 0.3719 | 0.9819 | 0.9909 | | No log | 1.75 | 476 | 0.9143 | 0.4568 | 0.9143 | 0.9562 | | No log | 1.7574 | 478 | 0.9892 | 0.4083 | 0.9892 | 0.9946 | | No log | 1.7647 | 480 | 1.1726 | 0.3869 | 1.1726 | 1.0829 | | No log | 1.7721 | 482 | 1.2879 | 0.3656 | 1.2879 | 1.1348 | | No log | 1.7794 | 484 | 1.3599 | 0.3515 | 1.3599 | 1.1662 | | No log | 1.7868 | 486 | 1.2324 | 0.3742 | 1.2324 | 1.1101 | | No log | 1.7941 | 488 | 1.1564 | 0.3994 | 1.1564 | 1.0754 | | No log | 1.8015 | 490 | 1.3221 | 0.3515 | 1.3221 | 1.1498 | | No log | 1.8088 | 492 | 1.3979 | 0.3218 | 1.3979 | 1.1823 | | No log | 1.8162 | 494 | 1.4832 | 0.3218 | 1.4832 | 1.2179 | | No log | 1.8235 | 496 | 1.4421 | 0.3601 | 1.4421 | 1.2009 | | No log | 1.8309 | 498 | 1.3417 | 0.3601 | 1.3417 | 1.1583 | | 0.4721 | 1.8382 | 500 | 1.3252 | 0.3863 | 1.3252 | 1.1512 | | 0.4721 | 1.8456 | 502 | 1.3081 | 0.3474 | 1.3081 | 1.1437 | | 0.4721 | 1.8529 | 504 | 1.3403 | 0.3023 | 1.3403 | 1.1577 | | 0.4721 | 1.8603 | 506 | 1.2984 | 0.3367 | 1.2984 | 1.1395 | | 0.4721 | 1.8676 | 508 | 1.3018 | 0.3367 | 1.3018 | 1.1410 | | 0.4721 | 1.875 | 510 | 1.2874 | 0.3440 | 1.2874 | 1.1346 | | 0.4721 | 1.8824 | 512 | 1.2622 | 0.3474 | 1.2622 | 1.1235 | | 0.4721 | 1.8897 | 514 | 1.3164 | 0.3601 | 1.3164 | 1.1473 | | 0.4721 | 1.8971 | 516 | 1.3128 | 0.3601 | 1.3128 | 1.1458 | | 0.4721 | 1.9044 | 518 | 1.3915 | 0.3658 | 1.3915 | 1.1796 | | 0.4721 | 1.9118 | 520 | 1.6121 | 0.3272 | 1.6121 | 1.2697 | | 0.4721 | 1.9191 | 522 | 1.8421 | 0.2433 | 1.8421 | 1.3572 | | 0.4721 | 1.9265 | 524 | 1.9424 | 0.2426 | 1.9424 | 1.3937 | | 0.4721 | 1.9338 | 526 | 1.7711 | 0.2351 | 1.7711 | 1.3308 | | 0.4721 | 1.9412 | 528 | 1.4265 | 0.3672 | 1.4265 | 1.1944 | | 0.4721 | 1.9485 | 530 | 1.2093 | 0.3624 | 1.2093 | 1.0997 | | 0.4721 | 1.9559 | 532 | 1.1969 | 0.3367 | 1.1969 | 1.0940 
| | 0.4721 | 1.9632 | 534 | 1.3047 | 0.3672 | 1.3047 | 1.1422 | | 0.4721 | 1.9706 | 536 | 1.4739 | 0.3407 | 1.4739 | 1.2140 | | 0.4721 | 1.9779 | 538 | 1.5744 | 0.3096 | 1.5744 | 1.2547 | | 0.4721 | 1.9853 | 540 | 1.5730 | 0.3096 | 1.5730 | 1.2542 | | 0.4721 | 1.9926 | 542 | 1.4016 | 0.3340 | 1.4016 | 1.1839 | | 0.4721 | 2.0 | 544 | 1.1557 | 0.3746 | 1.1557 | 1.0750 | | 0.4721 | 2.0074 | 546 | 1.0363 | 0.4281 | 1.0363 | 1.0180 | | 0.4721 | 2.0147 | 548 | 1.0313 | 0.4321 | 1.0313 | 1.0155 | | 0.4721 | 2.0221 | 550 | 1.1109 | 0.3895 | 1.1109 | 1.0540 | | 0.4721 | 2.0294 | 552 | 1.2923 | 0.3729 | 1.2923 | 1.1368 | | 0.4721 | 2.0368 | 554 | 1.4350 | 0.3340 | 1.4350 | 1.1979 | | 0.4721 | 2.0441 | 556 | 1.4658 | 0.3340 | 1.4658 | 1.2107 | | 0.4721 | 2.0515 | 558 | 1.4222 | 0.3340 | 1.4222 | 1.1926 | | 0.4721 | 2.0588 | 560 | 1.2971 | 0.3232 | 1.2971 | 1.1389 | | 0.4721 | 2.0662 | 562 | 1.1615 | 0.2094 | 1.1615 | 1.0777 | | 0.4721 | 2.0735 | 564 | 1.1209 | 0.1941 | 1.1209 | 1.0587 | | 0.4721 | 2.0809 | 566 | 1.1241 | 0.2242 | 1.1241 | 1.0603 | | 0.4721 | 2.0882 | 568 | 1.1874 | 0.3077 | 1.1874 | 1.0897 | | 0.4721 | 2.0956 | 570 | 1.3090 | 0.3190 | 1.3090 | 1.1441 | | 0.4721 | 2.1029 | 572 | 1.5053 | 0.3111 | 1.5053 | 1.2269 | | 0.4721 | 2.1103 | 574 | 1.7278 | 0.2888 | 1.7278 | 1.3145 | | 0.4721 | 2.1176 | 576 | 1.7770 | 0.2784 | 1.7770 | 1.3330 | | 0.4721 | 2.125 | 578 | 1.6584 | 0.2958 | 1.6584 | 1.2878 | | 0.4721 | 2.1324 | 580 | 1.4601 | 0.2993 | 1.4601 | 1.2084 | | 0.4721 | 2.1397 | 582 | 1.2456 | 0.3269 | 1.2456 | 1.1161 | | 0.4721 | 2.1471 | 584 | 1.1073 | 0.3310 | 1.1073 | 1.0523 | | 0.4721 | 2.1544 | 586 | 1.0296 | 0.3310 | 1.0296 | 1.0147 | | 0.4721 | 2.1618 | 588 | 1.0216 | 0.3813 | 1.0216 | 1.0108 | | 0.4721 | 2.1691 | 590 | 1.0886 | 0.3793 | 1.0886 | 1.0433 | | 0.4721 | 2.1765 | 592 | 1.2305 | 0.3714 | 1.2305 | 1.1093 | | 0.4721 | 2.1838 | 594 | 1.4045 | 0.3205 | 1.4045 | 1.1851 | | 0.4721 | 2.1912 | 596 | 1.5920 | 0.3130 | 1.5920 | 1.2618 | | 0.4721 | 2.1985 | 598 | 1.5614 | 0.3096 | 1.5614 | 1.2496 | | 0.4721 | 2.2059 | 600 | 1.3862 | 0.3205 | 1.3862 | 1.1774 | | 0.4721 | 2.2132 | 602 | 1.2032 | 0.3847 | 1.2032 | 1.0969 | | 0.4721 | 2.2206 | 604 | 1.1671 | 0.3656 | 1.1671 | 1.0803 | | 0.4721 | 2.2279 | 606 | 1.1837 | 0.3582 | 1.1837 | 1.0880 | | 0.4721 | 2.2353 | 608 | 1.2412 | 0.3863 | 1.2412 | 1.1141 | | 0.4721 | 2.2426 | 610 | 1.3478 | 0.3729 | 1.3478 | 1.1610 | | 0.4721 | 2.25 | 612 | 1.4002 | 0.3467 | 1.4002 | 1.1833 | | 0.4721 | 2.2574 | 614 | 1.4310 | 0.2823 | 1.4310 | 1.1963 | | 0.4721 | 2.2647 | 616 | 1.4953 | 0.2823 | 1.4953 | 1.2228 | | 0.4721 | 2.2721 | 618 | 1.5577 | 0.2704 | 1.5577 | 1.2481 | | 0.4721 | 2.2794 | 620 | 1.5849 | 0.2750 | 1.5849 | 1.2589 | | 0.4721 | 2.2868 | 622 | 1.6597 | 0.2837 | 1.6597 | 1.2883 | | 0.4721 | 2.2941 | 624 | 1.6296 | 0.2837 | 1.6296 | 1.2766 | | 0.4721 | 2.3015 | 626 | 1.5226 | 0.2556 | 1.5226 | 1.2340 | | 0.4721 | 2.3088 | 628 | 1.4135 | 0.1696 | 1.4135 | 1.1889 | | 0.4721 | 2.3162 | 630 | 1.3239 | 0.1458 | 1.3239 | 1.1506 | | 0.4721 | 2.3235 | 632 | 1.2978 | 0.0638 | 1.2978 | 1.1392 | | 0.4721 | 2.3309 | 634 | 1.2904 | 0.0638 | 1.2904 | 1.1360 | | 0.4721 | 2.3382 | 636 | 1.3947 | 0.1814 | 1.3947 | 1.1810 | | 0.4721 | 2.3456 | 638 | 1.4941 | 0.2607 | 1.4941 | 1.2223 | | 0.4721 | 2.3529 | 640 | 1.6230 | 0.3032 | 1.6230 | 1.2740 | | 0.4721 | 2.3603 | 642 | 1.6231 | 0.3069 | 1.6231 | 1.2740 | | 0.4721 | 2.3676 | 644 | 1.5951 | 0.3106 | 1.5951 | 1.2630 | | 0.4721 | 2.375 | 646 | 1.4368 | 0.3630 | 1.4368 | 1.1987 | | 0.4721 | 2.3824 | 648 | 
1.3571 | 0.3913 | 1.3571 | 1.1649 | | 0.4721 | 2.3897 | 650 | 1.4225 | 0.3685 | 1.4225 | 1.1927 | | 0.4721 | 2.3971 | 652 | 1.6123 | 0.3289 | 1.6123 | 1.2697 | | 0.4721 | 2.4044 | 654 | 1.6336 | 0.3115 | 1.6336 | 1.2781 | | 0.4721 | 2.4118 | 656 | 1.6547 | 0.3115 | 1.6547 | 1.2863 | | 0.4721 | 2.4191 | 658 | 1.5493 | 0.3619 | 1.5493 | 1.2447 | | 0.4721 | 2.4265 | 660 | 1.5936 | 0.3645 | 1.5936 | 1.2624 | | 0.4721 | 2.4338 | 662 | 1.6564 | 0.3334 | 1.6564 | 1.2870 | | 0.4721 | 2.4412 | 664 | 1.5189 | 0.3619 | 1.5189 | 1.2324 | | 0.4721 | 2.4485 | 666 | 1.3214 | 0.3672 | 1.3214 | 1.1495 | | 0.4721 | 2.4559 | 668 | 1.2569 | 0.3746 | 1.2569 | 1.1211 | | 0.4721 | 2.4632 | 670 | 1.2912 | 0.3672 | 1.2912 | 1.1363 | | 0.4721 | 2.4706 | 672 | 1.3432 | 0.3701 | 1.3432 | 1.1590 | | 0.4721 | 2.4779 | 674 | 1.4037 | 0.3658 | 1.4037 | 1.1848 | | 0.4721 | 2.4853 | 676 | 1.5483 | 0.3448 | 1.5483 | 1.2443 | | 0.4721 | 2.4926 | 678 | 1.7491 | 0.2932 | 1.7491 | 1.3225 | | 0.4721 | 2.5 | 680 | 1.7748 | 0.2703 | 1.7748 | 1.3322 | | 0.4721 | 2.5074 | 682 | 1.6245 | 0.3130 | 1.6245 | 1.2746 | | 0.4721 | 2.5147 | 684 | 1.4812 | 0.3630 | 1.4812 | 1.2170 | | 0.4721 | 2.5221 | 686 | 1.3185 | 0.3571 | 1.3185 | 1.1483 | | 0.4721 | 2.5294 | 688 | 1.2047 | 0.3474 | 1.2047 | 1.0976 | | 0.4721 | 2.5368 | 690 | 1.1379 | 0.3550 | 1.1379 | 1.0667 | | 0.4721 | 2.5441 | 692 | 1.0886 | 0.3423 | 1.0886 | 1.0434 | | 0.4721 | 2.5515 | 694 | 1.0816 | 0.3656 | 1.0816 | 1.0400 | | 0.4721 | 2.5588 | 696 | 1.1027 | 0.3440 | 1.1027 | 1.0501 | | 0.4721 | 2.5662 | 698 | 1.1269 | 0.3023 | 1.1269 | 1.0616 | | 0.4721 | 2.5735 | 700 | 1.2501 | 0.3144 | 1.2501 | 1.1181 | | 0.4721 | 2.5809 | 702 | 1.3735 | 0.3182 | 1.3735 | 1.1719 | | 0.4721 | 2.5882 | 704 | 1.4053 | 0.3374 | 1.4053 | 1.1855 | | 0.4721 | 2.5956 | 706 | 1.4562 | 0.3254 | 1.4562 | 1.2067 | | 0.4721 | 2.6029 | 708 | 1.4101 | 0.3630 | 1.4101 | 1.1875 | | 0.4721 | 2.6103 | 710 | 1.3274 | 0.3922 | 1.3274 | 1.1521 | | 0.4721 | 2.6176 | 712 | 1.2671 | 0.3969 | 1.2671 | 1.1257 | | 0.4721 | 2.625 | 714 | 1.2907 | 0.3969 | 1.2907 | 1.1361 | | 0.4721 | 2.6324 | 716 | 1.4614 | 0.3418 | 1.4614 | 1.2089 | | 0.4721 | 2.6397 | 718 | 1.7646 | 0.2928 | 1.7646 | 1.3284 | | 0.4721 | 2.6471 | 720 | 1.8650 | 0.2961 | 1.8650 | 1.3657 | | 0.4721 | 2.6544 | 722 | 1.8346 | 0.2928 | 1.8346 | 1.3545 | | 0.4721 | 2.6618 | 724 | 1.6919 | 0.3024 | 1.6919 | 1.3007 | | 0.4721 | 2.6691 | 726 | 1.4589 | 0.3232 | 1.4589 | 1.2078 | | 0.4721 | 2.6765 | 728 | 1.2477 | 0.1926 | 1.2477 | 1.1170 | | 0.4721 | 2.6838 | 730 | 1.1717 | 0.1555 | 1.1717 | 1.0825 | | 0.4721 | 2.6912 | 732 | 1.1523 | 0.1555 | 1.1523 | 1.0734 | | 0.4721 | 2.6985 | 734 | 1.1791 | 0.1555 | 1.1791 | 1.0859 | | 0.4721 | 2.7059 | 736 | 1.2430 | 0.1549 | 1.2430 | 1.1149 | | 0.4721 | 2.7132 | 738 | 1.2539 | 0.1042 | 1.2539 | 1.1198 | | 0.4721 | 2.7206 | 740 | 1.2318 | 0.1042 | 1.2318 | 1.1099 | | 0.4721 | 2.7279 | 742 | 1.2092 | 0.1837 | 1.2092 | 1.0996 | | 0.4721 | 2.7353 | 744 | 1.2546 | 0.3308 | 1.2546 | 1.1201 | | 0.4721 | 2.7426 | 746 | 1.3540 | 0.3863 | 1.3540 | 1.1636 | | 0.4721 | 2.75 | 748 | 1.4413 | 0.3630 | 1.4413 | 1.2006 | | 0.4721 | 2.7574 | 750 | 1.4784 | 0.3478 | 1.4784 | 1.2159 | | 0.4721 | 2.7647 | 752 | 1.4918 | 0.3226 | 1.4918 | 1.2214 | | 0.4721 | 2.7721 | 754 | 1.5666 | 0.3289 | 1.5666 | 1.2516 | | 0.4721 | 2.7794 | 756 | 1.5863 | 0.3289 | 1.5863 | 1.2595 | | 0.4721 | 2.7868 | 758 | 1.5368 | 0.3349 | 1.5368 | 1.2397 | | 0.4721 | 2.7941 | 760 | 1.6003 | 0.3378 | 1.6003 | 1.2650 | | 0.4721 | 2.8015 | 762 | 1.5925 | 0.3378 | 1.5925 | 
1.2619 | | 0.4721 | 2.8088 | 764 | 1.6110 | 0.3318 | 1.6110 | 1.2693 | | 0.4721 | 2.8162 | 766 | 1.6778 | 0.2776 | 1.6778 | 1.2953 | | 0.4721 | 2.8235 | 768 | 1.6932 | 0.2811 | 1.6932 | 1.3012 | | 0.4721 | 2.8309 | 770 | 1.6551 | 0.2798 | 1.6551 | 1.2865 | | 0.4721 | 2.8382 | 772 | 1.4949 | 0.3478 | 1.4949 | 1.2227 | | 0.4721 | 2.8456 | 774 | 1.4270 | 0.3657 | 1.4270 | 1.1946 | | 0.4721 | 2.8529 | 776 | 1.3890 | 0.3657 | 1.3890 | 1.1786 | | 0.4721 | 2.8603 | 778 | 1.2928 | 0.3867 | 1.2928 | 1.1370 | | 0.4721 | 2.8676 | 780 | 1.3425 | 0.3867 | 1.3425 | 1.1587 | | 0.4721 | 2.875 | 782 | 1.4477 | 0.3569 | 1.4477 | 1.2032 | | 0.4721 | 2.8824 | 784 | 1.5974 | 0.3512 | 1.5974 | 1.2639 | | 0.4721 | 2.8897 | 786 | 1.6097 | 0.3318 | 1.6097 | 1.2688 | | 0.4721 | 2.8971 | 788 | 1.5936 | 0.3258 | 1.5936 | 1.2624 | | 0.4721 | 2.9044 | 790 | 1.4733 | 0.3569 | 1.4733 | 1.2138 | | 0.4721 | 2.9118 | 792 | 1.3286 | 0.3867 | 1.3286 | 1.1526 | | 0.4721 | 2.9191 | 794 | 1.1980 | 0.4469 | 1.1980 | 1.0945 | | 0.4721 | 2.9265 | 796 | 1.3099 | 0.3867 | 1.3099 | 1.1445 | | 0.4721 | 2.9338 | 798 | 1.4565 | 0.3682 | 1.4565 | 1.2068 | | 0.4721 | 2.9412 | 800 | 1.7647 | 0.2901 | 1.7647 | 1.3284 | | 0.4721 | 2.9485 | 802 | 2.0111 | 0.2491 | 2.0111 | 1.4181 | | 0.4721 | 2.9559 | 804 | 2.0195 | 0.2491 | 2.0195 | 1.4211 | | 0.4721 | 2.9632 | 806 | 1.8605 | 0.2668 | 1.8605 | 1.3640 | | 0.4721 | 2.9706 | 808 | 1.6012 | 0.2932 | 1.6012 | 1.2654 | | 0.4721 | 2.9779 | 810 | 1.4392 | 0.3499 | 1.4392 | 1.1997 | | 0.4721 | 2.9853 | 812 | 1.4537 | 0.3384 | 1.4537 | 1.2057 | | 0.4721 | 2.9926 | 814 | 1.5391 | 0.3135 | 1.5391 | 1.2406 | | 0.4721 | 3.0 | 816 | 1.5190 | 0.3101 | 1.5190 | 1.2325 | | 0.4721 | 3.0074 | 818 | 1.4948 | 0.3101 | 1.4948 | 1.2226 | | 0.4721 | 3.0147 | 820 | 1.4123 | 0.3353 | 1.4123 | 1.1884 | | 0.4721 | 3.0221 | 822 | 1.3734 | 0.3685 | 1.3734 | 1.1719 | | 0.4721 | 3.0294 | 824 | 1.3003 | 0.4006 | 1.3003 | 1.1403 | | 0.4721 | 3.0368 | 826 | 1.2423 | 0.3984 | 1.2423 | 1.1146 | | 0.4721 | 3.0441 | 828 | 1.3057 | 0.4006 | 1.3057 | 1.1427 | | 0.4721 | 3.0515 | 830 | 1.4662 | 0.3736 | 1.4662 | 1.2109 | | 0.4721 | 3.0588 | 832 | 1.5623 | 0.3646 | 1.5623 | 1.2499 | | 0.4721 | 3.0662 | 834 | 1.5484 | 0.3492 | 1.5484 | 1.2443 | | 0.4721 | 3.0735 | 836 | 1.5370 | 0.3492 | 1.5370 | 1.2398 | | 0.4721 | 3.0809 | 838 | 1.6333 | 0.3317 | 1.6333 | 1.2780 | | 0.4721 | 3.0882 | 840 | 1.6355 | 0.3317 | 1.6355 | 1.2789 | | 0.4721 | 3.0956 | 842 | 1.5835 | 0.3406 | 1.5835 | 1.2584 | | 0.4721 | 3.1029 | 844 | 1.4063 | 0.3802 | 1.4063 | 1.1859 | | 0.4721 | 3.1103 | 846 | 1.2724 | 0.4211 | 1.2724 | 1.1280 | | 0.4721 | 3.1176 | 848 | 1.2633 | 0.4211 | 1.2633 | 1.1240 | | 0.4721 | 3.125 | 850 | 1.3375 | 0.3752 | 1.3375 | 1.1565 | | 0.4721 | 3.1324 | 852 | 1.5047 | 0.3555 | 1.5047 | 1.2267 | | 0.4721 | 3.1397 | 854 | 1.5044 | 0.3443 | 1.5044 | 1.2266 | | 0.4721 | 3.1471 | 856 | 1.4001 | 0.3685 | 1.4001 | 1.1832 | | 0.4721 | 3.1544 | 858 | 1.2910 | 0.3984 | 1.2910 | 1.1362 | | 0.4721 | 3.1618 | 860 | 1.2753 | 0.3801 | 1.2753 | 1.1293 | | 0.4721 | 3.1691 | 862 | 1.3305 | 0.3913 | 1.3305 | 1.1535 | | 0.4721 | 3.1765 | 864 | 1.4912 | 0.3384 | 1.4912 | 1.2211 | | 0.4721 | 3.1838 | 866 | 1.6492 | 0.3228 | 1.6492 | 1.2842 | | 0.4721 | 3.1912 | 868 | 1.6959 | 0.2932 | 1.6959 | 1.3022 | | 0.4721 | 3.1985 | 870 | 1.6043 | 0.3334 | 1.6043 | 1.2666 | | 0.4721 | 3.2059 | 872 | 1.5785 | 0.3414 | 1.5785 | 1.2564 | | 0.4721 | 3.2132 | 874 | 1.5583 | 0.3414 | 1.5583 | 1.2483 | | 0.4721 | 3.2206 | 876 | 1.4230 | 0.3809 | 1.4230 | 1.1929 | | 0.4721 | 
3.2279 | 878 | 1.3020 | 0.3913 | 1.3020 | 1.1411 | | 0.4721 | 3.2353 | 880 | 1.3585 | 0.3783 | 1.3585 | 1.1655 | | 0.4721 | 3.2426 | 882 | 1.4911 | 0.3555 | 1.4911 | 1.2211 | | 0.4721 | 3.25 | 884 | 1.6480 | 0.2798 | 1.6480 | 1.2838 | | 0.4721 | 3.2574 | 886 | 1.7023 | 0.2610 | 1.7023 | 1.3047 | | 0.4721 | 3.2647 | 888 | 1.5951 | 0.3228 | 1.5951 | 1.2630 | | 0.4721 | 3.2721 | 890 | 1.4559 | 0.3443 | 1.4559 | 1.2066 | | 0.4721 | 3.2794 | 892 | 1.3887 | 0.3857 | 1.3887 | 1.1784 | | 0.4721 | 3.2868 | 894 | 1.3626 | 0.3685 | 1.3626 | 1.1673 | | 0.4721 | 3.2941 | 896 | 1.4950 | 0.3443 | 1.4950 | 1.2227 | | 0.4721 | 3.3015 | 898 | 1.6353 | 0.2867 | 1.6353 | 1.2788 | | 0.4721 | 3.3088 | 900 | 1.7808 | 0.2995 | 1.7808 | 1.3345 | | 0.4721 | 3.3162 | 902 | 1.8680 | 0.2995 | 1.8680 | 1.3667 | | 0.4721 | 3.3235 | 904 | 1.9381 | 0.2705 | 1.9381 | 1.3922 | | 0.4721 | 3.3309 | 906 | 1.9563 | 0.2571 | 1.9563 | 1.3987 | | 0.4721 | 3.3382 | 908 | 1.8615 | 0.2907 | 1.8615 | 1.3644 | | 0.4721 | 3.3456 | 910 | 1.6682 | 0.2867 | 1.6682 | 1.2916 | | 0.4721 | 3.3529 | 912 | 1.5524 | 0.3258 | 1.5524 | 1.2460 | | 0.4721 | 3.3603 | 914 | 1.5520 | 0.3363 | 1.5520 | 1.2458 | | 0.4721 | 3.3676 | 916 | 1.5403 | 0.3303 | 1.5403 | 1.2411 | | 0.4721 | 3.375 | 918 | 1.4974 | 0.3384 | 1.4974 | 1.2237 | | 0.4721 | 3.3824 | 920 | 1.5003 | 0.3384 | 1.5003 | 1.2249 | | 0.4721 | 3.3897 | 922 | 1.4681 | 0.3469 | 1.4681 | 1.2117 | | 0.4721 | 3.3971 | 924 | 1.4731 | 0.3499 | 1.4731 | 1.2137 | | 0.4721 | 3.4044 | 926 | 1.5209 | 0.3592 | 1.5209 | 1.2333 | | 0.4721 | 3.4118 | 928 | 1.4890 | 0.3478 | 1.4890 | 1.2203 | | 0.4721 | 3.4191 | 930 | 1.5115 | 0.3396 | 1.5115 | 1.2294 | | 0.4721 | 3.4265 | 932 | 1.5625 | 0.3050 | 1.5625 | 1.2500 | | 0.4721 | 3.4338 | 934 | 1.6941 | 0.3140 | 1.6941 | 1.3016 | | 0.4721 | 3.4412 | 936 | 1.8974 | 0.2934 | 1.8974 | 1.3775 | | 0.4721 | 3.4485 | 938 | 1.9169 | 0.2720 | 1.9169 | 1.3845 | | 0.4721 | 3.4559 | 940 | 1.7142 | 0.3019 | 1.7142 | 1.3093 | | 0.4721 | 3.4632 | 942 | 1.4809 | 0.3592 | 1.4809 | 1.2169 | | 0.4721 | 3.4706 | 944 | 1.3318 | 0.3888 | 1.3318 | 1.1540 | | 0.4721 | 3.4779 | 946 | 1.2008 | 0.3482 | 1.2008 | 1.0958 | | 0.4721 | 3.4853 | 948 | 1.1425 | 0.3590 | 1.1425 | 1.0689 | | 0.4721 | 3.4926 | 950 | 1.1664 | 0.3482 | 1.1664 | 1.0800 | | 0.4721 | 3.5 | 952 | 1.2021 | 0.3474 | 1.2021 | 1.0964 | | 0.4721 | 3.5074 | 954 | 1.2741 | 0.3888 | 1.2741 | 1.1288 | | 0.4721 | 3.5147 | 956 | 1.3171 | 0.3783 | 1.3171 | 1.1476 | | 0.4721 | 3.5221 | 958 | 1.3958 | 0.3565 | 1.3958 | 1.1814 | | 0.4721 | 3.5294 | 960 | 1.4225 | 0.3592 | 1.4225 | 1.1927 | | 0.4721 | 3.5368 | 962 | 1.4275 | 0.3761 | 1.4275 | 1.1948 | | 0.4721 | 3.5441 | 964 | 1.3233 | 0.3783 | 1.3233 | 1.1504 | | 0.4721 | 3.5515 | 966 | 1.3187 | 0.3783 | 1.3187 | 1.1483 | | 0.4721 | 3.5588 | 968 | 1.3733 | 0.3809 | 1.3733 | 1.1719 | | 0.4721 | 3.5662 | 970 | 1.4351 | 0.3565 | 1.4351 | 1.1980 | | 0.4721 | 3.5735 | 972 | 1.5896 | 0.3443 | 1.5896 | 1.2608 | | 0.4721 | 3.5809 | 974 | 1.6748 | 0.2835 | 1.6748 | 1.2942 | | 0.4721 | 3.5882 | 976 | 1.6616 | 0.3392 | 1.6616 | 1.2890 | | 0.4721 | 3.5956 | 978 | 1.4932 | 0.3384 | 1.4932 | 1.2220 | | 0.4721 | 3.6029 | 980 | 1.3259 | 0.3783 | 1.3259 | 1.1515 | | 0.4721 | 3.6103 | 982 | 1.3531 | 0.3783 | 1.3531 | 1.1632 | | 0.4721 | 3.6176 | 984 | 1.5138 | 0.3384 | 1.5138 | 1.2304 | | 0.4721 | 3.625 | 986 | 1.6145 | 0.3241 | 1.6145 | 1.2706 | | 0.4721 | 3.6324 | 988 | 1.6007 | 0.3609 | 1.6007 | 1.2652 | | 0.4721 | 3.6397 | 990 | 1.4858 | 0.3384 | 1.4858 | 1.2190 | | 0.4721 | 3.6471 | 992 | 1.4456 | 
0.3384 | 1.4456 | 1.2023 | | 0.4721 | 3.6544 | 994 | 1.4749 | 0.3384 | 1.4749 | 1.2145 | | 0.4721 | 3.6618 | 996 | 1.4431 | 0.3384 | 1.4431 | 1.2013 | | 0.4721 | 3.6691 | 998 | 1.4506 | 0.3384 | 1.4506 | 1.2044 | | 0.106 | 3.6765 | 1000 | 1.4291 | 0.3499 | 1.4291 | 1.1954 | | 0.106 | 3.6838 | 1002 | 1.5614 | 0.3555 | 1.5614 | 1.2496 | | 0.106 | 3.6912 | 1004 | 1.6241 | 0.3609 | 1.6241 | 1.2744 | | 0.106 | 3.6985 | 1006 | 1.5874 | 0.3582 | 1.5874 | 1.2599 | | 0.106 | 3.7059 | 1008 | 1.6943 | 0.3057 | 1.6943 | 1.3016 | | 0.106 | 3.7132 | 1010 | 1.8603 | 0.2876 | 1.8603 | 1.3639 | | 0.106 | 3.7206 | 1012 | 1.7777 | 0.2961 | 1.7777 | 1.3333 | | 0.106 | 3.7279 | 1014 | 1.6079 | 0.3499 | 1.6079 | 1.2680 | | 0.106 | 3.7353 | 1016 | 1.3860 | 0.3469 | 1.3860 | 1.1773 | | 0.106 | 3.7426 | 1018 | 1.1656 | 0.3888 | 1.1656 | 1.0796 | | 0.106 | 3.75 | 1020 | 1.0503 | 0.4315 | 1.0503 | 1.0249 | | 0.106 | 3.7574 | 1022 | 1.0647 | 0.4315 | 1.0647 | 1.0319 | | 0.106 | 3.7647 | 1024 | 1.1813 | 0.4119 | 1.1813 | 1.0869 | | 0.106 | 3.7721 | 1026 | 1.4307 | 0.3384 | 1.4307 | 1.1961 | | 0.106 | 3.7794 | 1028 | 1.6524 | 0.3228 | 1.6524 | 1.2855 | | 0.106 | 3.7868 | 1030 | 1.6903 | 0.3258 | 1.6903 | 1.3001 | | 0.106 | 3.7941 | 1032 | 1.6548 | 0.3258 | 1.6548 | 1.2864 | | 0.106 | 3.8015 | 1034 | 1.5683 | 0.3228 | 1.5683 | 1.2523 | | 0.106 | 3.8088 | 1036 | 1.5696 | 0.3228 | 1.5696 | 1.2529 | | 0.106 | 3.8162 | 1038 | 1.4676 | 0.3478 | 1.4676 | 1.2115 | | 0.106 | 3.8235 | 1040 | 1.4413 | 0.3592 | 1.4413 | 1.2005 | | 0.106 | 3.8309 | 1042 | 1.3123 | 0.3658 | 1.3123 | 1.1456 | | 0.106 | 3.8382 | 1044 | 1.1756 | 0.3888 | 1.1756 | 1.0842 | | 0.106 | 3.8456 | 1046 | 1.1741 | 0.3888 | 1.1741 | 1.0836 | | 0.106 | 3.8529 | 1048 | 1.1918 | 0.3888 | 1.1918 | 1.0917 | | 0.106 | 3.8603 | 1050 | 1.2505 | 0.3913 | 1.2505 | 1.1183 | | 0.106 | 3.8676 | 1052 | 1.2704 | 0.3686 | 1.2704 | 1.1271 | | 0.106 | 3.875 | 1054 | 1.3631 | 0.3439 | 1.3631 | 1.1675 | | 0.106 | 3.8824 | 1056 | 1.4957 | 0.2995 | 1.4957 | 1.2230 | | 0.106 | 3.8897 | 1058 | 1.6546 | 0.3135 | 1.6546 | 1.2863 | | 0.106 | 3.8971 | 1060 | 1.7736 | 0.2835 | 1.7736 | 1.3318 | | 0.106 | 3.9044 | 1062 | 1.7163 | 0.2932 | 1.7163 | 1.3101 | | 0.106 | 3.9118 | 1064 | 1.5496 | 0.3209 | 1.5496 | 1.2448 | | 0.106 | 3.9191 | 1066 | 1.3775 | 0.3106 | 1.3775 | 1.1737 | | 0.106 | 3.9265 | 1068 | 1.2748 | 0.3686 | 1.2748 | 1.1291 | | 0.106 | 3.9338 | 1070 | 1.2459 | 0.3913 | 1.2459 | 1.1162 | | 0.106 | 3.9412 | 1072 | 1.2464 | 0.3783 | 1.2464 | 1.1164 | | 0.106 | 3.9485 | 1074 | 1.2233 | 0.3913 | 1.2233 | 1.1060 | | 0.106 | 3.9559 | 1076 | 1.2454 | 0.3658 | 1.2454 | 1.1160 | | 0.106 | 3.9632 | 1078 | 1.3131 | 0.3288 | 1.3131 | 1.1459 | | 0.106 | 3.9706 | 1080 | 1.4180 | 0.3106 | 1.4180 | 1.1908 | | 0.106 | 3.9779 | 1082 | 1.4679 | 0.3175 | 1.4679 | 1.2116 | | 0.106 | 3.9853 | 1084 | 1.4497 | 0.3272 | 1.4497 | 1.2040 | | 0.106 | 3.9926 | 1086 | 1.4683 | 0.3619 | 1.4683 | 1.2118 | | 0.106 | 4.0 | 1088 | 1.5396 | 0.3485 | 1.5396 | 1.2408 | | 0.106 | 4.0074 | 1090 | 1.7686 | 0.3111 | 1.7686 | 1.3299 | | 0.106 | 4.0147 | 1092 | 2.0152 | 0.2622 | 2.0152 | 1.4196 | | 0.106 | 4.0221 | 1094 | 2.0777 | 0.2154 | 2.0777 | 1.4414 | | 0.106 | 4.0294 | 1096 | 1.8937 | 0.2419 | 1.8937 | 1.3761 | | 0.106 | 4.0368 | 1098 | 1.7425 | 0.2965 | 1.7425 | 1.3200 | | 0.106 | 4.0441 | 1100 | 1.6022 | 0.3303 | 1.6022 | 1.2658 | | 0.106 | 4.0515 | 1102 | 1.5155 | 0.3378 | 1.5155 | 1.2310 | | 0.106 | 4.0588 | 1104 | 1.4627 | 0.3512 | 1.4627 | 1.2094 | | 0.106 | 4.0662 | 1106 | 1.4250 | 0.3595 | 1.4250 | 1.1937 | 
| 0.106 | 4.0735 | 1108 | 1.2679 | 0.3777 | 1.2679 | 1.1260 | | 0.106 | 4.0809 | 1110 | 1.2116 | 0.3777 | 1.2116 | 1.1007 | | 0.106 | 4.0882 | 1112 | 1.2604 | 0.3711 | 1.2604 | 1.1227 | | 0.106 | 4.0956 | 1114 | 1.4174 | 0.3619 | 1.4174 | 1.1905 | | 0.106 | 4.1029 | 1116 | 1.6580 | 0.3258 | 1.6580 | 1.2877 | | 0.106 | 4.1103 | 1118 | 1.8114 | 0.2966 | 1.8114 | 1.3459 | | 0.106 | 4.1176 | 1120 | 1.8327 | 0.2966 | 1.8327 | 1.3538 | | 0.106 | 4.125 | 1122 | 1.7188 | 0.2896 | 1.7188 | 1.3110 | | 0.106 | 4.1324 | 1124 | 1.5116 | 0.3384 | 1.5116 | 1.2295 | | 0.106 | 4.1397 | 1126 | 1.2881 | 0.3439 | 1.2881 | 1.1350 | | 0.106 | 4.1471 | 1128 | 1.1817 | 0.3863 | 1.1817 | 1.0870 | | 0.106 | 4.1544 | 1130 | 1.1532 | 0.3863 | 1.1532 | 1.0739 | | 0.106 | 4.1618 | 1132 | 1.1965 | 0.3863 | 1.1965 | 1.0938 | | 0.106 | 4.1691 | 1134 | 1.3330 | 0.3321 | 1.3330 | 1.1545 | | 0.106 | 4.1765 | 1136 | 1.5663 | 0.3067 | 1.5663 | 1.2515 | | 0.106 | 4.1838 | 1138 | 1.7812 | 0.2932 | 1.7812 | 1.3346 | | 0.106 | 4.1912 | 1140 | 1.9065 | 0.2988 | 1.9065 | 1.3808 | | 0.106 | 4.1985 | 1142 | 1.8528 | 0.2993 | 1.8528 | 1.3612 | | 0.106 | 4.2059 | 1144 | 1.6607 | 0.3228 | 1.6607 | 1.2887 | | 0.106 | 4.2132 | 1146 | 1.5638 | 0.3471 | 1.5638 | 1.2505 | | 0.106 | 4.2206 | 1148 | 1.4705 | 0.3592 | 1.4705 | 1.2127 | | 0.106 | 4.2279 | 1150 | 1.4091 | 0.3592 | 1.4091 | 1.1870 | | 0.106 | 4.2353 | 1152 | 1.4608 | 0.3592 | 1.4608 | 1.2086 | | 0.106 | 4.2426 | 1154 | 1.4916 | 0.3506 | 1.4916 | 1.2213 | | 0.106 | 4.25 | 1156 | 1.5010 | 0.3303 | 1.5010 | 1.2252 | | 0.106 | 4.2574 | 1158 | 1.5337 | 0.3471 | 1.5337 | 1.2384 | | 0.106 | 4.2647 | 1160 | 1.5784 | 0.3135 | 1.5784 | 1.2563 | | 0.106 | 4.2721 | 1162 | 1.5900 | 0.3135 | 1.5900 | 1.2610 | | 0.106 | 4.2794 | 1164 | 1.6323 | 0.2896 | 1.6323 | 1.2776 | | 0.106 | 4.2868 | 1166 | 1.6069 | 0.2896 | 1.6069 | 1.2676 | | 0.106 | 4.2941 | 1168 | 1.6944 | 0.2896 | 1.6944 | 1.3017 | | 0.106 | 4.3015 | 1170 | 1.7742 | 0.2798 | 1.7742 | 1.3320 | | 0.106 | 4.3088 | 1172 | 1.8058 | 0.2740 | 1.8058 | 1.3438 | | 0.106 | 4.3162 | 1174 | 1.7650 | 0.2798 | 1.7650 | 1.3285 | | 0.106 | 4.3235 | 1176 | 1.7703 | 0.2798 | 1.7703 | 1.3305 | | 0.106 | 4.3309 | 1178 | 1.7456 | 0.2896 | 1.7456 | 1.3212 | | 0.106 | 4.3382 | 1180 | 1.6619 | 0.2896 | 1.6619 | 1.2891 | | 0.106 | 4.3456 | 1182 | 1.5527 | 0.3067 | 1.5527 | 1.2461 | | 0.106 | 4.3529 | 1184 | 1.5563 | 0.3067 | 1.5563 | 1.2475 | | 0.106 | 4.3603 | 1186 | 1.5291 | 0.3032 | 1.5291 | 1.2366 | | 0.106 | 4.3676 | 1188 | 1.5045 | 0.3032 | 1.5045 | 1.2266 | | 0.106 | 4.375 | 1190 | 1.4368 | 0.2995 | 1.4368 | 1.1986 | | 0.106 | 4.3824 | 1192 | 1.4100 | 0.3353 | 1.4100 | 1.1874 | | 0.106 | 4.3897 | 1194 | 1.5429 | 0.3067 | 1.5429 | 1.2421 | | 0.106 | 4.3971 | 1196 | 1.5980 | 0.2723 | 1.5980 | 1.2641 | | 0.106 | 4.4044 | 1198 | 1.7160 | 0.2798 | 1.7160 | 1.3100 | | 0.106 | 4.4118 | 1200 | 1.8708 | 0.2559 | 1.8708 | 1.3678 | | 0.106 | 4.4191 | 1202 | 1.9926 | 0.2583 | 1.9926 | 1.4116 | | 0.106 | 4.4265 | 1204 | 1.9632 | 0.2583 | 1.9632 | 1.4012 | | 0.106 | 4.4338 | 1206 | 1.8504 | 0.2510 | 1.8504 | 1.3603 | | 0.106 | 4.4412 | 1208 | 1.7136 | 0.2740 | 1.7136 | 1.3091 | | 0.106 | 4.4485 | 1210 | 1.5654 | 0.2962 | 1.5654 | 1.2512 | | 0.106 | 4.4559 | 1212 | 1.5821 | 0.2962 | 1.5821 | 1.2578 | | 0.106 | 4.4632 | 1214 | 1.6108 | 0.2962 | 1.6108 | 1.2692 | | 0.106 | 4.4706 | 1216 | 1.6960 | 0.2532 | 1.6960 | 1.3023 | | 0.106 | 4.4779 | 1218 | 1.7508 | 0.2740 | 1.7508 | 1.3232 | | 0.106 | 4.4853 | 1220 | 1.7220 | 0.2798 | 1.7220 | 1.3123 | | 0.106 | 4.4926 | 1222 
| 1.6420 | 0.2626 | 1.6420 | 1.2814 | | 0.106 | 4.5 | 1224 | 1.4733 | 0.3106 | 1.4733 | 1.2138 | | 0.106 | 4.5074 | 1226 | 1.3559 | 0.3686 | 1.3559 | 1.1644 | | 0.106 | 4.5147 | 1228 | 1.2926 | 0.3658 | 1.2926 | 1.1369 | | 0.106 | 4.5221 | 1230 | 1.2552 | 0.3658 | 1.2552 | 1.1204 | | 0.106 | 4.5294 | 1232 | 1.3049 | 0.3304 | 1.3049 | 1.1423 | | 0.106 | 4.5368 | 1234 | 1.3662 | 0.3220 | 1.3662 | 1.1689 | | 0.106 | 4.5441 | 1236 | 1.4148 | 0.3106 | 1.4148 | 1.1895 | | 0.106 | 4.5515 | 1238 | 1.4669 | 0.3141 | 1.4669 | 1.2111 | | 0.106 | 4.5588 | 1240 | 1.4492 | 0.3141 | 1.4492 | 1.2038 | | 0.106 | 4.5662 | 1242 | 1.4790 | 0.3032 | 1.4790 | 1.2161 | | 0.106 | 4.5735 | 1244 | 1.6160 | 0.2962 | 1.6160 | 1.2712 | | 0.106 | 4.5809 | 1246 | 1.7513 | 0.2703 | 1.7513 | 1.3234 | | 0.106 | 4.5882 | 1248 | 1.7968 | 0.2648 | 1.7968 | 1.3405 | | 0.106 | 4.5956 | 1250 | 1.7726 | 0.2703 | 1.7726 | 1.3314 | | 0.106 | 4.6029 | 1252 | 1.7148 | 0.2532 | 1.7148 | 1.3095 | | 0.106 | 4.6103 | 1254 | 1.7660 | 0.2740 | 1.7660 | 1.3289 | | 0.106 | 4.6176 | 1256 | 1.8187 | 0.2559 | 1.8187 | 1.3486 | | 0.106 | 4.625 | 1258 | 1.8207 | 0.2559 | 1.8207 | 1.3494 | | 0.106 | 4.6324 | 1260 | 1.8980 | 0.2633 | 1.8980 | 1.3777 | | 0.106 | 4.6397 | 1262 | 1.8377 | 0.2559 | 1.8377 | 1.3556 | | 0.106 | 4.6471 | 1264 | 1.8508 | 0.2559 | 1.8508 | 1.3604 | | 0.106 | 4.6544 | 1266 | 1.9237 | 0.2583 | 1.9237 | 1.3870 | | 0.106 | 4.6618 | 1268 | 1.9292 | 0.2583 | 1.9292 | 1.3889 | | 0.106 | 4.6691 | 1270 | 1.8646 | 0.2633 | 1.8646 | 1.3655 | | 0.106 | 4.6765 | 1272 | 1.8467 | 0.2633 | 1.8467 | 1.3589 | | 0.106 | 4.6838 | 1274 | 1.7571 | 0.2648 | 1.7571 | 1.3256 | | 0.106 | 4.6912 | 1276 | 1.7515 | 0.2648 | 1.7515 | 1.3234 | | 0.106 | 4.6985 | 1278 | 1.8365 | 0.2633 | 1.8365 | 1.3552 | | 0.106 | 4.7059 | 1280 | 1.8606 | 0.2547 | 1.8606 | 1.3640 | | 0.106 | 4.7132 | 1282 | 1.7612 | 0.2685 | 1.7612 | 1.3271 | | 0.106 | 4.7206 | 1284 | 1.5939 | 0.3156 | 1.5939 | 1.2625 | | 0.106 | 4.7279 | 1286 | 1.3894 | 0.3469 | 1.3894 | 1.1787 | | 0.106 | 4.7353 | 1288 | 1.3259 | 0.3469 | 1.3259 | 1.1515 | | 0.106 | 4.7426 | 1290 | 1.2506 | 0.3863 | 1.2506 | 1.1183 | | 0.106 | 4.75 | 1292 | 1.2502 | 0.3863 | 1.2502 | 1.1181 | | 0.106 | 4.7574 | 1294 | 1.3303 | 0.3469 | 1.3303 | 1.1534 | | 0.106 | 4.7647 | 1296 | 1.5061 | 0.3353 | 1.5061 | 1.2272 | | 0.106 | 4.7721 | 1298 | 1.6298 | 0.3392 | 1.6298 | 1.2766 | | 0.106 | 4.7794 | 1300 | 1.6437 | 0.3392 | 1.6437 | 1.2820 | | 0.106 | 4.7868 | 1302 | 1.5539 | 0.3414 | 1.5539 | 1.2465 | | 0.106 | 4.7941 | 1304 | 1.4721 | 0.3353 | 1.4721 | 1.2133 | | 0.106 | 4.8015 | 1306 | 1.4599 | 0.3353 | 1.4599 | 1.2083 | | 0.106 | 4.8088 | 1308 | 1.4360 | 0.3353 | 1.4360 | 1.1983 | | 0.106 | 4.8162 | 1310 | 1.3737 | 0.3685 | 1.3737 | 1.1721 | | 0.106 | 4.8235 | 1312 | 1.3575 | 0.3685 | 1.3575 | 1.1651 | | 0.106 | 4.8309 | 1314 | 1.4576 | 0.3761 | 1.4576 | 1.2073 | | 0.106 | 4.8382 | 1316 | 1.6437 | 0.3187 | 1.6437 | 1.2821 | | 0.106 | 4.8456 | 1318 | 1.8494 | 0.2789 | 1.8494 | 1.3599 | | 0.106 | 4.8529 | 1320 | 1.9769 | 0.2491 | 1.9769 | 1.4060 | | 0.106 | 4.8603 | 1322 | 1.9129 | 0.2456 | 1.9129 | 1.3831 | | 0.106 | 4.8676 | 1324 | 1.8041 | 0.2933 | 1.8041 | 1.3432 | | 0.106 | 4.875 | 1326 | 1.6743 | 0.3288 | 1.6743 | 1.2939 | | 0.106 | 4.8824 | 1328 | 1.5598 | 0.3414 | 1.5598 | 1.2489 | | 0.106 | 4.8897 | 1330 | 1.4512 | 0.3353 | 1.4512 | 1.2046 | | 0.106 | 4.8971 | 1332 | 1.4473 | 0.3353 | 1.4473 | 1.2031 | | 0.106 | 4.9044 | 1334 | 1.4595 | 0.3353 | 1.4595 | 1.2081 | | 0.106 | 4.9118 | 1336 | 1.5112 | 0.3414 | 1.5112 
| 1.2293 | | 0.106 | 4.9191 | 1338 | 1.5454 | 0.3414 | 1.5454 | 1.2432 | | 0.106 | 4.9265 | 1340 | 1.6817 | 0.3187 | 1.6817 | 1.2968 | | 0.106 | 4.9338 | 1342 | 1.8096 | 0.2721 | 1.8096 | 1.3452 | | 0.106 | 4.9412 | 1344 | 1.9306 | 0.2536 | 1.9306 | 1.3895 | | 0.106 | 4.9485 | 1346 | 1.9531 | 0.2853 | 1.9531 | 1.3975 | | 0.106 | 4.9559 | 1348 | 1.9171 | 0.2821 | 1.9171 | 1.3846 | | 0.106 | 4.9632 | 1350 | 1.8077 | 0.3086 | 1.8077 | 1.3445 | | 0.106 | 4.9706 | 1352 | 1.6823 | 0.3089 | 1.6823 | 1.2970 | | 0.106 | 4.9779 | 1354 | 1.5084 | 0.3452 | 1.5084 | 1.2282 | | 0.106 | 4.9853 | 1356 | 1.4252 | 0.3645 | 1.4252 | 1.1938 | | 0.106 | 4.9926 | 1358 | 1.4376 | 0.3645 | 1.4376 | 1.1990 | | 0.106 | 5.0 | 1360 | 1.5466 | 0.3258 | 1.5466 | 1.2436 | | 0.106 | 5.0074 | 1362 | 1.6601 | 0.3187 | 1.6601 | 1.2884 | | 0.106 | 5.0147 | 1364 | 1.6480 | 0.3187 | 1.6480 | 1.2838 | | 0.106 | 5.0221 | 1366 | 1.5061 | 0.3471 | 1.5061 | 1.2272 | | 0.106 | 5.0294 | 1368 | 1.3909 | 0.3353 | 1.3909 | 1.1794 | | 0.106 | 5.0368 | 1370 | 1.3542 | 0.3469 | 1.3542 | 1.1637 | | 0.106 | 5.0441 | 1372 | 1.3797 | 0.3469 | 1.3797 | 1.1746 | | 0.106 | 5.0515 | 1374 | 1.4113 | 0.3469 | 1.4113 | 1.1880 | | 0.106 | 5.0588 | 1376 | 1.4791 | 0.2995 | 1.4791 | 1.2162 | | 0.106 | 5.0662 | 1378 | 1.5114 | 0.3032 | 1.5114 | 1.2294 | | 0.106 | 5.0735 | 1380 | 1.5301 | 0.3032 | 1.5301 | 1.2370 | | 0.106 | 5.0809 | 1382 | 1.5514 | 0.3032 | 1.5514 | 1.2456 | | 0.106 | 5.0882 | 1384 | 1.5845 | 0.3032 | 1.5845 | 1.2587 | | 0.106 | 5.0956 | 1386 | 1.6348 | 0.3101 | 1.6348 | 1.2786 | | 0.106 | 5.1029 | 1388 | 1.6950 | 0.2798 | 1.6950 | 1.3019 | | 0.106 | 5.1103 | 1390 | 1.6691 | 0.2896 | 1.6691 | 1.2919 | | 0.106 | 5.1176 | 1392 | 1.5854 | 0.3443 | 1.5854 | 1.2591 | | 0.106 | 5.125 | 1394 | 1.5512 | 0.3443 | 1.5512 | 1.2455 | | 0.106 | 5.1324 | 1396 | 1.5849 | 0.3499 | 1.5849 | 1.2589 | | 0.106 | 5.1397 | 1398 | 1.6929 | 0.3057 | 1.6929 | 1.3011 | | 0.106 | 5.1471 | 1400 | 1.8082 | 0.2933 | 1.8082 | 1.3447 | | 0.106 | 5.1544 | 1402 | 1.8446 | 0.2965 | 1.8446 | 1.3582 | | 0.106 | 5.1618 | 1404 | 1.7648 | 0.2901 | 1.7648 | 1.3285 | | 0.106 | 5.1691 | 1406 | 1.6964 | 0.3057 | 1.6964 | 1.3025 | | 0.106 | 5.1765 | 1408 | 1.6307 | 0.3156 | 1.6307 | 1.2770 | | 0.106 | 5.1838 | 1410 | 1.5993 | 0.3392 | 1.5993 | 1.2646 | | 0.106 | 5.1912 | 1412 | 1.5680 | 0.3609 | 1.5680 | 1.2522 | | 0.106 | 5.1985 | 1414 | 1.5691 | 0.3443 | 1.5691 | 1.2526 | | 0.106 | 5.2059 | 1416 | 1.5651 | 0.3101 | 1.5651 | 1.2511 | | 0.106 | 5.2132 | 1418 | 1.5675 | 0.3101 | 1.5675 | 1.2520 | | 0.106 | 5.2206 | 1420 | 1.5824 | 0.2997 | 1.5824 | 1.2579 | | 0.106 | 5.2279 | 1422 | 1.5716 | 0.3101 | 1.5716 | 1.2537 | | 0.106 | 5.2353 | 1424 | 1.6340 | 0.2966 | 1.6340 | 1.2783 | | 0.106 | 5.2426 | 1426 | 1.7327 | 0.2648 | 1.7327 | 1.3163 | | 0.106 | 5.25 | 1428 | 1.7401 | 0.2559 | 1.7401 | 1.3191 | | 0.106 | 5.2574 | 1430 | 1.6535 | 0.2740 | 1.6535 | 1.2859 | | 0.106 | 5.2647 | 1432 | 1.5806 | 0.3065 | 1.5806 | 1.2572 | | 0.106 | 5.2721 | 1434 | 1.6014 | 0.2835 | 1.6014 | 1.2655 | | 0.106 | 5.2794 | 1436 | 1.6139 | 0.3156 | 1.6139 | 1.2704 | | 0.106 | 5.2868 | 1438 | 1.6381 | 0.3156 | 1.6381 | 1.2799 | | 0.106 | 5.2941 | 1440 | 1.6687 | 0.2961 | 1.6687 | 1.2918 | | 0.106 | 5.3015 | 1442 | 1.7396 | 0.2933 | 1.7396 | 1.3189 | | 0.106 | 5.3088 | 1444 | 1.7429 | 0.2933 | 1.7429 | 1.3202 | | 0.106 | 5.3162 | 1446 | 1.7160 | 0.2867 | 1.7160 | 1.3100 | | 0.106 | 5.3235 | 1448 | 1.7488 | 0.2933 | 1.7488 | 1.3224 | | 0.106 | 5.3309 | 1450 | 1.8074 | 0.2583 | 1.8074 | 1.3444 | | 0.106 | 
5.3382 | 1452 | 1.7952 | 0.2633 | 1.7952 | 1.3399 | | 0.106 | 5.3456 | 1454 | 1.7040 | 0.2648 | 1.7040 | 1.3054 | | 0.106 | 5.3529 | 1456 | 1.6502 | 0.2648 | 1.6502 | 1.2846 | | 0.106 | 5.3603 | 1458 | 1.5326 | 0.3067 | 1.5326 | 1.2380 | | 0.106 | 5.3676 | 1460 | 1.4549 | 0.3353 | 1.4549 | 1.2062 | | 0.106 | 5.375 | 1462 | 1.4361 | 0.3353 | 1.4361 | 1.1984 | | 0.106 | 5.3824 | 1464 | 1.4527 | 0.2995 | 1.4527 | 1.2053 | | 0.106 | 5.3897 | 1466 | 1.5063 | 0.3067 | 1.5063 | 1.2273 | | 0.106 | 5.3971 | 1468 | 1.5542 | 0.2997 | 1.5542 | 1.2467 | | 0.106 | 5.4044 | 1470 | 1.5072 | 0.3067 | 1.5072 | 1.2277 | | 0.106 | 5.4118 | 1472 | 1.4903 | 0.3067 | 1.4903 | 1.2208 | | 0.106 | 5.4191 | 1474 | 1.5545 | 0.2997 | 1.5545 | 1.2468 | | 0.106 | 5.4265 | 1476 | 1.6071 | 0.2896 | 1.6071 | 1.2677 | | 0.106 | 5.4338 | 1478 | 1.6314 | 0.2966 | 1.6314 | 1.2773 | | 0.106 | 5.4412 | 1480 | 1.6387 | 0.2966 | 1.6387 | 1.2801 | | 0.106 | 5.4485 | 1482 | 1.6295 | 0.2798 | 1.6295 | 1.2765 | | 0.106 | 5.4559 | 1484 | 1.6107 | 0.2997 | 1.6107 | 1.2691 | | 0.106 | 5.4632 | 1486 | 1.5912 | 0.2997 | 1.5912 | 1.2614 | | 0.106 | 5.4706 | 1488 | 1.5638 | 0.2997 | 1.5638 | 1.2505 | | 0.106 | 5.4779 | 1490 | 1.6182 | 0.2896 | 1.6182 | 1.2721 | | 0.106 | 5.4853 | 1492 | 1.7234 | 0.2870 | 1.7234 | 1.3128 | | 0.106 | 5.4926 | 1494 | 1.6987 | 0.2966 | 1.6987 | 1.3034 | | 0.106 | 5.5 | 1496 | 1.6364 | 0.2798 | 1.6364 | 1.2792 | | 0.106 | 5.5074 | 1498 | 1.5799 | 0.2997 | 1.5799 | 1.2570 | | 0.074 | 5.5147 | 1500 | 1.5969 | 0.2896 | 1.5969 | 1.2637 | | 0.074 | 5.5221 | 1502 | 1.6228 | 0.2798 | 1.6228 | 1.2739 | | 0.074 | 5.5294 | 1504 | 1.6176 | 0.2798 | 1.6176 | 1.2719 | | 0.074 | 5.5368 | 1506 | 1.6445 | 0.2966 | 1.6445 | 1.2824 | | 0.074 | 5.5441 | 1508 | 1.6886 | 0.2740 | 1.6886 | 1.2995 | | 0.074 | 5.5515 | 1510 | 1.7108 | 0.2740 | 1.7108 | 1.3080 | | 0.074 | 5.5588 | 1512 | 1.7538 | 0.2648 | 1.7538 | 1.3243 | | 0.074 | 5.5662 | 1514 | 1.8032 | 0.2559 | 1.8032 | 1.3428 | | 0.074 | 5.5735 | 1516 | 1.8619 | 0.2559 | 1.8619 | 1.3645 | | 0.074 | 5.5809 | 1518 | 1.9541 | 0.2703 | 1.9541 | 1.3979 | | 0.074 | 5.5882 | 1520 | 2.0048 | 0.2618 | 2.0048 | 1.4159 | | 0.074 | 5.5956 | 1522 | 1.9583 | 0.2703 | 1.9583 | 1.3994 | | 0.074 | 5.6029 | 1524 | 1.8222 | 0.2559 | 1.8222 | 1.3499 | | 0.074 | 5.6103 | 1526 | 1.6746 | 0.2740 | 1.6746 | 1.2940 | | 0.074 | 5.6176 | 1528 | 1.4819 | 0.3272 | 1.4819 | 1.2173 | | 0.074 | 5.625 | 1530 | 1.3601 | 0.3469 | 1.3601 | 1.1662 | | 0.074 | 5.6324 | 1532 | 1.3275 | 0.3469 | 1.3275 | 1.1522 | | 0.074 | 5.6397 | 1534 | 1.3503 | 0.3469 | 1.3503 | 1.1620 | | 0.074 | 5.6471 | 1536 | 1.4461 | 0.3032 | 1.4461 | 1.2025 | | 0.074 | 5.6544 | 1538 | 1.5705 | 0.3032 | 1.5705 | 1.2532 | | 0.074 | 5.6618 | 1540 | 1.7264 | 0.2740 | 1.7264 | 1.3139 | | 0.074 | 5.6691 | 1542 | 1.8515 | 0.2559 | 1.8515 | 1.3607 | | 0.074 | 5.6765 | 1544 | 1.9530 | 0.2703 | 1.9530 | 1.3975 | | 0.074 | 5.6838 | 1546 | 2.0396 | 0.2618 | 2.0396 | 1.4282 | | 0.074 | 5.6912 | 1548 | 2.0016 | 0.2618 | 2.0016 | 1.4148 | | 0.074 | 5.6985 | 1550 | 1.9222 | 0.2703 | 1.9222 | 1.3864 | | 0.074 | 5.7059 | 1552 | 1.8233 | 0.2596 | 1.8233 | 1.3503 | | 0.074 | 5.7132 | 1554 | 1.7377 | 0.2740 | 1.7377 | 1.3182 | | 0.074 | 5.7206 | 1556 | 1.7238 | 0.2740 | 1.7238 | 1.3129 | | 0.074 | 5.7279 | 1558 | 1.6375 | 0.3057 | 1.6375 | 1.2796 | | 0.074 | 5.7353 | 1560 | 1.5580 | 0.3156 | 1.5580 | 1.2482 | | 0.074 | 5.7426 | 1562 | 1.5661 | 0.3156 | 1.5661 | 1.2515 | | 0.074 | 5.75 | 1564 | 1.6082 | 0.2835 | 1.6082 | 1.2681 | | 0.074 | 5.7574 | 1566 | 1.6560 | 
0.2835 | 1.6560 | 1.2868 | | 0.074 | 5.7647 | 1568 | 1.6586 | 0.2835 | 1.6586 | 1.2879 | | 0.074 | 5.7721 | 1570 | 1.7213 | 0.2776 | 1.7213 | 1.3120 | | 0.074 | 5.7794 | 1572 | 1.8240 | 0.2685 | 1.8240 | 1.3505 | | 0.074 | 5.7868 | 1574 | 1.8753 | 0.2755 | 1.8753 | 1.3694 | | 0.074 | 5.7941 | 1576 | 1.9196 | 0.2736 | 1.9196 | 1.3855 | | 0.074 | 5.8015 | 1578 | 1.8527 | 0.2685 | 1.8527 | 1.3611 | | 0.074 | 5.8088 | 1580 | 1.7213 | 0.2740 | 1.7213 | 1.3120 | | 0.074 | 5.8162 | 1582 | 1.5473 | 0.3135 | 1.5473 | 1.2439 | | 0.074 | 5.8235 | 1584 | 1.4466 | 0.3319 | 1.4466 | 1.2027 | | 0.074 | 5.8309 | 1586 | 1.3966 | 0.3141 | 1.3966 | 1.1818 | | 0.074 | 5.8382 | 1588 | 1.4121 | 0.3141 | 1.4121 | 1.1883 | | 0.074 | 5.8456 | 1590 | 1.4515 | 0.3141 | 1.4515 | 1.2048 | | 0.074 | 5.8529 | 1592 | 1.4598 | 0.3254 | 1.4598 | 1.2082 | | 0.074 | 5.8603 | 1594 | 1.4399 | 0.3254 | 1.4399 | 1.2000 | | 0.074 | 5.8676 | 1596 | 1.4587 | 0.3254 | 1.4587 | 1.2078 | | 0.074 | 5.875 | 1598 | 1.5584 | 0.3101 | 1.5584 | 1.2483 | | 0.074 | 5.8824 | 1600 | 1.6888 | 0.2966 | 1.6888 | 1.2995 | | 0.074 | 5.8897 | 1602 | 1.7313 | 0.2966 | 1.7313 | 1.3158 | | 0.074 | 5.8971 | 1604 | 1.6922 | 0.2966 | 1.6922 | 1.3008 | | 0.074 | 5.9044 | 1606 | 1.6150 | 0.2966 | 1.6150 | 1.2708 | | 0.074 | 5.9118 | 1608 | 1.5553 | 0.3032 | 1.5553 | 1.2471 | | 0.074 | 5.9191 | 1610 | 1.4877 | 0.3697 | 1.4877 | 1.2197 | | 0.074 | 5.9265 | 1612 | 1.4552 | 0.3671 | 1.4552 | 1.2063 | | 0.074 | 5.9338 | 1614 | 1.4464 | 0.3697 | 1.4464 | 1.2027 | | 0.074 | 5.9412 | 1616 | 1.5040 | 0.3471 | 1.5040 | 1.2264 | | 0.074 | 5.9485 | 1618 | 1.5137 | 0.3258 | 1.5137 | 1.2303 | | 0.074 | 5.9559 | 1620 | 1.5116 | 0.3258 | 1.5116 | 1.2295 | | 0.074 | 5.9632 | 1622 | 1.4699 | 0.3582 | 1.4699 | 1.2124 | | 0.074 | 5.9706 | 1624 | 1.4485 | 0.3582 | 1.4485 | 1.2036 | | 0.074 | 5.9779 | 1626 | 1.4609 | 0.3241 | 1.4609 | 1.2087 | | 0.074 | 5.9853 | 1628 | 1.4623 | 0.3032 | 1.4623 | 1.2093 | | 0.074 | 5.9926 | 1630 | 1.4292 | 0.3141 | 1.4292 | 1.1955 | | 0.074 | 6.0 | 1632 | 1.3694 | 0.3499 | 1.3694 | 1.1702 | | 0.074 | 6.0074 | 1634 | 1.3096 | 0.3530 | 1.3096 | 1.1444 | | 0.074 | 6.0147 | 1636 | 1.2596 | 0.3888 | 1.2596 | 1.1223 | | 0.074 | 6.0221 | 1638 | 1.2354 | 0.3863 | 1.2354 | 1.1115 | | 0.074 | 6.0294 | 1640 | 1.2123 | 0.3836 | 1.2123 | 1.1011 | | 0.074 | 6.0368 | 1642 | 1.2225 | 0.3836 | 1.2225 | 1.1057 | | 0.074 | 6.0441 | 1644 | 1.2854 | 0.3757 | 1.2854 | 1.1337 | | 0.074 | 6.0515 | 1646 | 1.3845 | 0.3618 | 1.3845 | 1.1767 | | 0.074 | 6.0588 | 1648 | 1.4465 | 0.3141 | 1.4465 | 1.2027 | | 0.074 | 6.0662 | 1650 | 1.4989 | 0.3032 | 1.4989 | 1.2243 | | 0.074 | 6.0735 | 1652 | 1.5156 | 0.3032 | 1.5156 | 1.2311 | | 0.074 | 6.0809 | 1654 | 1.4924 | 0.3032 | 1.4924 | 1.2216 | | 0.074 | 6.0882 | 1656 | 1.4968 | 0.3032 | 1.4968 | 1.2234 | | 0.074 | 6.0956 | 1658 | 1.4775 | 0.3032 | 1.4775 | 1.2155 | | 0.074 | 6.1029 | 1660 | 1.4445 | 0.3141 | 1.4445 | 1.2019 | | 0.074 | 6.1103 | 1662 | 1.3879 | 0.3254 | 1.3879 | 1.1781 | | 0.074 | 6.1176 | 1664 | 1.3860 | 0.3254 | 1.3860 | 1.1773 | | 0.074 | 6.125 | 1666 | 1.4192 | 0.3141 | 1.4192 | 1.1913 | | 0.074 | 6.1324 | 1668 | 1.4931 | 0.3067 | 1.4931 | 1.2219 | | 0.074 | 6.1397 | 1670 | 1.5621 | 0.2761 | 1.5621 | 1.2498 | | 0.074 | 6.1471 | 1672 | 1.5905 | 0.2932 | 1.5905 | 1.2611 | | 0.074 | 6.1544 | 1674 | 1.5612 | 0.2761 | 1.5612 | 1.2495 | | 0.074 | 6.1618 | 1676 | 1.4792 | 0.3067 | 1.4792 | 1.2162 | | 0.074 | 6.1691 | 1678 | 1.3745 | 0.3499 | 1.3745 | 1.1724 | | 0.074 | 6.1765 | 1680 | 1.2702 | 0.3783 | 1.2702 | 1.1270 
| | 0.074 | 6.1838 | 1682 | 1.2254 | 0.3757 | 1.2254 | 1.1070 | | 0.074 | 6.1912 | 1684 | 1.2226 | 0.3757 | 1.2226 | 1.1057 | | 0.074 | 6.1985 | 1686 | 1.2756 | 0.3783 | 1.2756 | 1.1294 | | 0.074 | 6.2059 | 1688 | 1.3759 | 0.3711 | 1.3759 | 1.1730 | | 0.074 | 6.2132 | 1690 | 1.4301 | 0.3414 | 1.4301 | 1.1959 | | 0.074 | 6.2206 | 1692 | 1.5047 | 0.3032 | 1.5047 | 1.2266 | | 0.074 | 6.2279 | 1694 | 1.5801 | 0.2966 | 1.5801 | 1.2570 | | 0.074 | 6.2353 | 1696 | 1.6030 | 0.2966 | 1.6030 | 1.2661 | | 0.074 | 6.2426 | 1698 | 1.5889 | 0.2966 | 1.5889 | 1.2605 | | 0.074 | 6.25 | 1700 | 1.6294 | 0.2966 | 1.6294 | 1.2765 | | 0.074 | 6.2574 | 1702 | 1.6416 | 0.2966 | 1.6416 | 1.2813 | | 0.074 | 6.2647 | 1704 | 1.6289 | 0.2966 | 1.6289 | 1.2763 | | 0.074 | 6.2721 | 1706 | 1.6773 | 0.2966 | 1.6773 | 1.2951 | | 0.074 | 6.2794 | 1708 | 1.7352 | 0.2740 | 1.7352 | 1.3173 | | 0.074 | 6.2868 | 1710 | 1.7132 | 0.2740 | 1.7132 | 1.3089 | | 0.074 | 6.2941 | 1712 | 1.5980 | 0.2966 | 1.5980 | 1.2641 | | 0.074 | 6.3015 | 1714 | 1.4993 | 0.3258 | 1.4993 | 1.2245 | | 0.074 | 6.3088 | 1716 | 1.4778 | 0.3363 | 1.4778 | 1.2157 | | 0.074 | 6.3162 | 1718 | 1.4949 | 0.3258 | 1.4949 | 1.2227 | | 0.074 | 6.3235 | 1720 | 1.4779 | 0.3303 | 1.4779 | 1.2157 | | 0.074 | 6.3309 | 1722 | 1.4599 | 0.3067 | 1.4599 | 1.2083 | | 0.074 | 6.3382 | 1724 | 1.4250 | 0.3414 | 1.4250 | 1.1937 | | 0.074 | 6.3456 | 1726 | 1.3831 | 0.3499 | 1.3831 | 1.1760 | | 0.074 | 6.3529 | 1728 | 1.3073 | 0.3589 | 1.3073 | 1.1434 | | 0.074 | 6.3603 | 1730 | 1.2538 | 0.3783 | 1.2538 | 1.1198 | | 0.074 | 6.3676 | 1732 | 1.2427 | 0.3783 | 1.2427 | 1.1148 | | 0.074 | 6.375 | 1734 | 1.2556 | 0.3783 | 1.2556 | 1.1205 | | 0.074 | 6.3824 | 1736 | 1.3503 | 0.3711 | 1.3503 | 1.1620 | | 0.074 | 6.3897 | 1738 | 1.4770 | 0.3334 | 1.4770 | 1.2153 | | 0.074 | 6.3971 | 1740 | 1.5800 | 0.2966 | 1.5800 | 1.2570 | | 0.074 | 6.4044 | 1742 | 1.6536 | 0.2966 | 1.6536 | 1.2859 | | 0.074 | 6.4118 | 1744 | 1.7012 | 0.2740 | 1.7012 | 1.3043 | | 0.074 | 6.4191 | 1746 | 1.7159 | 0.2740 | 1.7159 | 1.3099 | | 0.074 | 6.4265 | 1748 | 1.7104 | 0.2740 | 1.7104 | 1.3078 | | 0.074 | 6.4338 | 1750 | 1.7313 | 0.2740 | 1.7313 | 1.3158 | | 0.074 | 6.4412 | 1752 | 1.7165 | 0.2740 | 1.7165 | 1.3101 | | 0.074 | 6.4485 | 1754 | 1.6729 | 0.2740 | 1.6729 | 1.2934 | | 0.074 | 6.4559 | 1756 | 1.6085 | 0.2966 | 1.6085 | 1.2683 | | 0.074 | 6.4632 | 1758 | 1.6218 | 0.2966 | 1.6218 | 1.2735 | | 0.074 | 6.4706 | 1760 | 1.5959 | 0.2966 | 1.5959 | 1.2633 | | 0.074 | 6.4779 | 1762 | 1.5951 | 0.2966 | 1.5951 | 1.2630 | | 0.074 | 6.4853 | 1764 | 1.5674 | 0.2966 | 1.5674 | 1.2519 | | 0.074 | 6.4926 | 1766 | 1.4826 | 0.2896 | 1.4826 | 1.2176 | | 0.074 | 6.5 | 1768 | 1.4187 | 0.2926 | 1.4187 | 1.1911 | | 0.074 | 6.5074 | 1770 | 1.3899 | 0.3032 | 1.3899 | 1.1789 | | 0.074 | 6.5147 | 1772 | 1.3976 | 0.3032 | 1.3976 | 1.1822 | | 0.074 | 6.5221 | 1774 | 1.3961 | 0.3032 | 1.3961 | 1.1816 | | 0.074 | 6.5294 | 1776 | 1.4371 | 0.3032 | 1.4371 | 1.1988 | | 0.074 | 6.5368 | 1778 | 1.5102 | 0.2997 | 1.5102 | 1.2289 | | 0.074 | 6.5441 | 1780 | 1.5898 | 0.2966 | 1.5898 | 1.2609 | | 0.074 | 6.5515 | 1782 | 1.5718 | 0.2966 | 1.5718 | 1.2537 | | 0.074 | 6.5588 | 1784 | 1.5746 | 0.2966 | 1.5746 | 1.2548 | | 0.074 | 6.5662 | 1786 | 1.5340 | 0.2997 | 1.5340 | 1.2385 | | 0.074 | 6.5735 | 1788 | 1.4695 | 0.3101 | 1.4695 | 1.2122 | | 0.074 | 6.5809 | 1790 | 1.3901 | 0.3499 | 1.3901 | 1.1790 | | 0.074 | 6.5882 | 1792 | 1.3325 | 0.3469 | 1.3325 | 1.1544 | | 0.074 | 6.5956 | 1794 | 1.3379 | 0.3469 | 1.3379 | 1.1567 | | 0.074 | 6.6029 | 1796 
| 1.3714 | 0.3469 | 1.3714 | 1.1710 | | 0.074 | 6.6103 | 1798 | 1.3914 | 0.3469 | 1.3914 | 1.1796 | | 0.074 | 6.6176 | 1800 | 1.4407 | 0.3032 | 1.4407 | 1.2003 | | 0.074 | 6.625 | 1802 | 1.5214 | 0.2997 | 1.5214 | 1.2334 | | 0.074 | 6.6324 | 1804 | 1.6132 | 0.2966 | 1.6132 | 1.2701 | | 0.074 | 6.6397 | 1806 | 1.6723 | 0.2966 | 1.6723 | 1.2932 | | 0.074 | 6.6471 | 1808 | 1.6685 | 0.2966 | 1.6685 | 1.2917 | | 0.074 | 6.6544 | 1810 | 1.5798 | 0.2966 | 1.5798 | 1.2569 | | 0.074 | 6.6618 | 1812 | 1.4776 | 0.3135 | 1.4776 | 1.2155 | | 0.074 | 6.6691 | 1814 | 1.3989 | 0.3499 | 1.3989 | 1.1827 | | 0.074 | 6.6765 | 1816 | 1.3361 | 0.3469 | 1.3361 | 1.1559 | | 0.074 | 6.6838 | 1818 | 1.3243 | 0.3589 | 1.3243 | 1.1508 | | 0.074 | 6.6912 | 1820 | 1.3089 | 0.3589 | 1.3089 | 1.1441 | | 0.074 | 6.6985 | 1822 | 1.3390 | 0.3589 | 1.3390 | 1.1572 | | 0.074 | 6.7059 | 1824 | 1.3497 | 0.3589 | 1.3497 | 1.1618 | | 0.074 | 6.7132 | 1826 | 1.3588 | 0.3589 | 1.3588 | 1.1657 | | 0.074 | 6.7206 | 1828 | 1.3481 | 0.3589 | 1.3481 | 1.1611 | | 0.074 | 6.7279 | 1830 | 1.3715 | 0.3589 | 1.3715 | 1.1711 | | 0.074 | 6.7353 | 1832 | 1.4516 | 0.3384 | 1.4516 | 1.2048 | | 0.074 | 6.7426 | 1834 | 1.5240 | 0.3167 | 1.5240 | 1.2345 | | 0.074 | 6.75 | 1836 | 1.5692 | 0.2966 | 1.5692 | 1.2527 | | 0.074 | 6.7574 | 1838 | 1.6395 | 0.2740 | 1.6395 | 1.2804 | | 0.074 | 6.7647 | 1840 | 1.6916 | 0.2776 | 1.6916 | 1.3006 | | 0.074 | 6.7721 | 1842 | 1.6819 | 0.2776 | 1.6819 | 1.2969 | | 0.074 | 6.7794 | 1844 | 1.6296 | 0.2740 | 1.6296 | 1.2766 | | 0.074 | 6.7868 | 1846 | 1.5546 | 0.2966 | 1.5546 | 1.2468 | | 0.074 | 6.7941 | 1848 | 1.4773 | 0.3272 | 1.4773 | 1.2155 | | 0.074 | 6.8015 | 1850 | 1.4031 | 0.3384 | 1.4031 | 1.1845 | | 0.074 | 6.8088 | 1852 | 1.3560 | 0.3589 | 1.3560 | 1.1645 | | 0.074 | 6.8162 | 1854 | 1.3473 | 0.3589 | 1.3473 | 1.1607 | | 0.074 | 6.8235 | 1856 | 1.3674 | 0.3589 | 1.3674 | 1.1693 | | 0.074 | 6.8309 | 1858 | 1.4171 | 0.3032 | 1.4171 | 1.1904 | | 0.074 | 6.8382 | 1860 | 1.4604 | 0.3101 | 1.4604 | 1.2085 | | 0.074 | 6.8456 | 1862 | 1.5203 | 0.3272 | 1.5203 | 1.2330 | | 0.074 | 6.8529 | 1864 | 1.5418 | 0.3167 | 1.5418 | 1.2417 | | 0.074 | 6.8603 | 1866 | 1.5713 | 0.2966 | 1.5713 | 1.2535 | | 0.074 | 6.8676 | 1868 | 1.5879 | 0.2740 | 1.5879 | 1.2601 | | 0.074 | 6.875 | 1870 | 1.5414 | 0.3065 | 1.5414 | 1.2415 | | 0.074 | 6.8824 | 1872 | 1.5134 | 0.3167 | 1.5134 | 1.2302 | | 0.074 | 6.8897 | 1874 | 1.5063 | 0.2997 | 1.5063 | 1.2273 | | 0.074 | 6.8971 | 1876 | 1.5055 | 0.3334 | 1.5055 | 1.2270 | | 0.074 | 6.9044 | 1878 | 1.5411 | 0.3392 | 1.5411 | 1.2414 | | 0.074 | 6.9118 | 1880 | 1.5565 | 0.3057 | 1.5565 | 1.2476 | | 0.074 | 6.9191 | 1882 | 1.5372 | 0.3499 | 1.5372 | 1.2398 | | 0.074 | 6.9265 | 1884 | 1.5497 | 0.3392 | 1.5497 | 1.2449 | | 0.074 | 6.9338 | 1886 | 1.5970 | 0.3057 | 1.5970 | 1.2637 | | 0.074 | 6.9412 | 1888 | 1.6082 | 0.2740 | 1.6082 | 1.2681 | | 0.074 | 6.9485 | 1890 | 1.6467 | 0.2740 | 1.6467 | 1.2832 | | 0.074 | 6.9559 | 1892 | 1.6860 | 0.2740 | 1.6860 | 1.2984 | | 0.074 | 6.9632 | 1894 | 1.6600 | 0.2740 | 1.6600 | 1.2884 | | 0.074 | 6.9706 | 1896 | 1.6113 | 0.2896 | 1.6113 | 1.2694 | | 0.074 | 6.9779 | 1898 | 1.5795 | 0.2896 | 1.5795 | 1.2568 | | 0.074 | 6.9853 | 1900 | 1.5313 | 0.3334 | 1.5313 | 1.2375 | | 0.074 | 6.9926 | 1902 | 1.4850 | 0.3443 | 1.4850 | 1.2186 | | 0.074 | 7.0 | 1904 | 1.4931 | 0.3443 | 1.4931 | 1.2219 | | 0.074 | 7.0074 | 1906 | 1.4855 | 0.3443 | 1.4855 | 1.2188 | | 0.074 | 7.0147 | 1908 | 1.4713 | 0.3414 | 1.4713 | 1.2130 | | 0.074 | 7.0221 | 1910 | 1.4491 | 0.3499 | 1.4491 
| 1.2038 | | 0.074 | 7.0294 | 1912 | 1.4093 | 0.3469 | 1.4093 | 1.1871 | | 0.074 | 7.0368 | 1914 | 1.3710 | 0.3439 | 1.3710 | 1.1709 | | 0.074 | 7.0441 | 1916 | 1.3520 | 0.3439 | 1.3520 | 1.1627 | | 0.074 | 7.0515 | 1918 | 1.3420 | 0.3658 | 1.3420 | 1.1584 | | 0.074 | 7.0588 | 1920 | 1.3297 | 0.3658 | 1.3297 | 1.1531 | | 0.074 | 7.0662 | 1922 | 1.3308 | 0.3658 | 1.3308 | 1.1536 | | 0.074 | 7.0735 | 1924 | 1.3718 | 0.3560 | 1.3718 | 1.1712 | | 0.074 | 7.0809 | 1926 | 1.4204 | 0.3106 | 1.4204 | 1.1918 | | 0.074 | 7.0882 | 1928 | 1.4714 | 0.3067 | 1.4714 | 1.2130 | | 0.074 | 7.0956 | 1930 | 1.5069 | 0.3101 | 1.5069 | 1.2276 | | 0.074 | 7.1029 | 1932 | 1.5165 | 0.3101 | 1.5165 | 1.2315 | | 0.074 | 7.1103 | 1934 | 1.5024 | 0.3101 | 1.5024 | 1.2257 | | 0.074 | 7.1176 | 1936 | 1.5028 | 0.3443 | 1.5028 | 1.2259 | | 0.074 | 7.125 | 1938 | 1.4885 | 0.3443 | 1.4885 | 1.2201 | | 0.074 | 7.1324 | 1940 | 1.4752 | 0.3443 | 1.4752 | 1.2146 | | 0.074 | 7.1397 | 1942 | 1.4765 | 0.3443 | 1.4765 | 1.2151 | | 0.074 | 7.1471 | 1944 | 1.4817 | 0.3228 | 1.4817 | 1.2172 | | 0.074 | 7.1544 | 1946 | 1.4991 | 0.3228 | 1.4991 | 1.2244 | | 0.074 | 7.1618 | 1948 | 1.5269 | 0.3228 | 1.5269 | 1.2357 | | 0.074 | 7.1691 | 1950 | 1.5607 | 0.3392 | 1.5607 | 1.2493 | | 0.074 | 7.1765 | 1952 | 1.5804 | 0.3288 | 1.5804 | 1.2571 | | 0.074 | 7.1838 | 1954 | 1.6108 | 0.3057 | 1.6108 | 1.2692 | | 0.074 | 7.1912 | 1956 | 1.6184 | 0.3057 | 1.6184 | 1.2722 | | 0.074 | 7.1985 | 1958 | 1.6237 | 0.3057 | 1.6237 | 1.2743 | | 0.074 | 7.2059 | 1960 | 1.6280 | 0.3057 | 1.6280 | 1.2759 | | 0.074 | 7.2132 | 1962 | 1.6057 | 0.3057 | 1.6057 | 1.2672 | | 0.074 | 7.2206 | 1964 | 1.5578 | 0.3057 | 1.5578 | 1.2481 | | 0.074 | 7.2279 | 1966 | 1.5350 | 0.3057 | 1.5350 | 1.2390 | | 0.074 | 7.2353 | 1968 | 1.5825 | 0.3057 | 1.5825 | 1.2580 | | 0.074 | 7.2426 | 1970 | 1.6547 | 0.2961 | 1.6547 | 1.2863 | | 0.074 | 7.25 | 1972 | 1.6767 | 0.2961 | 1.6767 | 1.2949 | | 0.074 | 7.2574 | 1974 | 1.6708 | 0.2961 | 1.6708 | 1.2926 | | 0.074 | 7.2647 | 1976 | 1.6494 | 0.2961 | 1.6494 | 1.2843 | | 0.074 | 7.2721 | 1978 | 1.6450 | 0.2961 | 1.6450 | 1.2826 | | 0.074 | 7.2794 | 1980 | 1.6526 | 0.2961 | 1.6526 | 1.2855 | | 0.074 | 7.2868 | 1982 | 1.6910 | 0.2993 | 1.6910 | 1.3004 | | 0.074 | 7.2941 | 1984 | 1.6657 | 0.2993 | 1.6657 | 1.2906 | | 0.074 | 7.3015 | 1986 | 1.5950 | 0.3057 | 1.5950 | 1.2630 | | 0.074 | 7.3088 | 1988 | 1.5343 | 0.3057 | 1.5343 | 1.2387 | | 0.074 | 7.3162 | 1990 | 1.4496 | 0.3196 | 1.4496 | 1.2040 | | 0.074 | 7.3235 | 1992 | 1.3775 | 0.3499 | 1.3775 | 1.1737 | | 0.074 | 7.3309 | 1994 | 1.3343 | 0.3618 | 1.3343 | 1.1551 | | 0.074 | 7.3382 | 1996 | 1.2923 | 0.3658 | 1.2923 | 1.1368 | | 0.074 | 7.3456 | 1998 | 1.2782 | 0.3658 | 1.2782 | 1.1306 | | 0.0604 | 7.3529 | 2000 | 1.3055 | 0.3560 | 1.3055 | 1.1426 | | 0.0604 | 7.3603 | 2002 | 1.3709 | 0.3499 | 1.3709 | 1.1709 | | 0.0604 | 7.3676 | 2004 | 1.4677 | 0.3334 | 1.4677 | 1.2115 | | 0.0604 | 7.375 | 2006 | 1.5780 | 0.3057 | 1.5780 | 1.2562 | | 0.0604 | 7.3824 | 2008 | 1.6846 | 0.2685 | 1.6846 | 1.2979 | | 0.0604 | 7.3897 | 2010 | 1.7533 | 0.2596 | 1.7533 | 1.3241 | | 0.0604 | 7.3971 | 2012 | 1.8217 | 0.2668 | 1.8217 | 1.3497 | | 0.0604 | 7.4044 | 2014 | 1.8483 | 0.2583 | 1.8483 | 1.3595 | | 0.0604 | 7.4118 | 2016 | 1.8219 | 0.2633 | 1.8219 | 1.3498 | | 0.0604 | 7.4191 | 2018 | 1.7635 | 0.2685 | 1.7635 | 1.3280 | | 0.0604 | 7.4265 | 2020 | 1.6662 | 0.2648 | 1.6662 | 1.2908 | | 0.0604 | 7.4338 | 2022 | 1.5718 | 0.3167 | 1.5718 | 1.2537 | | 0.0604 | 7.4412 | 2024 | 1.5204 | 0.3101 | 1.5204 | 1.2330 | 
| 0.0604 | 7.4485 | 2026 | 1.5113 | 0.3101 | 1.5113 | 1.2293 | | 0.0604 | 7.4559 | 2028 | 1.5081 | 0.3272 | 1.5081 | 1.2280 | | 0.0604 | 7.4632 | 2030 | 1.5339 | 0.3272 | 1.5339 | 1.2385 | | 0.0604 | 7.4706 | 2032 | 1.5407 | 0.3272 | 1.5407 | 1.2412 | | 0.0604 | 7.4779 | 2034 | 1.5603 | 0.3272 | 1.5603 | 1.2491 | | 0.0604 | 7.4853 | 2036 | 1.6168 | 0.2966 | 1.6168 | 1.2715 | | 0.0604 | 7.4926 | 2038 | 1.6649 | 0.2740 | 1.6649 | 1.2903 | | 0.0604 | 7.5 | 2040 | 1.7059 | 0.2685 | 1.7059 | 1.3061 | | 0.0604 | 7.5074 | 2042 | 1.7416 | 0.2685 | 1.7416 | 1.3197 | | 0.0604 | 7.5147 | 2044 | 1.7187 | 0.2685 | 1.7187 | 1.3110 | | 0.0604 | 7.5221 | 2046 | 1.7257 | 0.2685 | 1.7257 | 1.3137 | | 0.0604 | 7.5294 | 2048 | 1.7238 | 0.2685 | 1.7238 | 1.3129 | | 0.0604 | 7.5368 | 2050 | 1.6816 | 0.2685 | 1.6816 | 1.2968 | | 0.0604 | 7.5441 | 2052 | 1.6679 | 0.2685 | 1.6679 | 1.2915 | | 0.0604 | 7.5515 | 2054 | 1.6249 | 0.2776 | 1.6249 | 1.2747 | | 0.0604 | 7.5588 | 2056 | 1.6006 | 0.2740 | 1.6006 | 1.2651 | | 0.0604 | 7.5662 | 2058 | 1.5626 | 0.2740 | 1.5626 | 1.2501 | | 0.0604 | 7.5735 | 2060 | 1.5662 | 0.2740 | 1.5662 | 1.2515 | | 0.0604 | 7.5809 | 2062 | 1.5548 | 0.2740 | 1.5548 | 1.2469 | | 0.0604 | 7.5882 | 2064 | 1.5314 | 0.2835 | 1.5314 | 1.2375 | | 0.0604 | 7.5956 | 2066 | 1.5525 | 0.2740 | 1.5525 | 1.2460 | | 0.0604 | 7.6029 | 2068 | 1.5997 | 0.2776 | 1.5997 | 1.2648 | | 0.0604 | 7.6103 | 2070 | 1.6333 | 0.2776 | 1.6333 | 1.2780 | | 0.0604 | 7.6176 | 2072 | 1.6786 | 0.2685 | 1.6786 | 1.2956 | | 0.0604 | 7.625 | 2074 | 1.6936 | 0.2685 | 1.6936 | 1.3014 | | 0.0604 | 7.6324 | 2076 | 1.6673 | 0.2993 | 1.6673 | 1.2912 | | 0.0604 | 7.6397 | 2078 | 1.6332 | 0.3089 | 1.6332 | 1.2780 | | 0.0604 | 7.6471 | 2080 | 1.6455 | 0.3089 | 1.6455 | 1.2828 | | 0.0604 | 7.6544 | 2082 | 1.6710 | 0.2993 | 1.6710 | 1.2927 | | 0.0604 | 7.6618 | 2084 | 1.6424 | 0.3089 | 1.6424 | 1.2815 | | 0.0604 | 7.6691 | 2086 | 1.6085 | 0.3089 | 1.6085 | 1.2683 | | 0.0604 | 7.6765 | 2088 | 1.5836 | 0.3057 | 1.5836 | 1.2584 | | 0.0604 | 7.6838 | 2090 | 1.5735 | 0.3057 | 1.5735 | 1.2544 | | 0.0604 | 7.6912 | 2092 | 1.5733 | 0.3057 | 1.5733 | 1.2543 | | 0.0604 | 7.6985 | 2094 | 1.5754 | 0.2966 | 1.5754 | 1.2552 | | 0.0604 | 7.7059 | 2096 | 1.5663 | 0.2966 | 1.5663 | 1.2515 | | 0.0604 | 7.7132 | 2098 | 1.5998 | 0.2966 | 1.5998 | 1.2648 | | 0.0604 | 7.7206 | 2100 | 1.6647 | 0.2740 | 1.6647 | 1.2902 | | 0.0604 | 7.7279 | 2102 | 1.7366 | 0.2685 | 1.7366 | 1.3178 | | 0.0604 | 7.7353 | 2104 | 1.7839 | 0.2596 | 1.7839 | 1.3356 | | 0.0604 | 7.7426 | 2106 | 1.8001 | 0.2596 | 1.8001 | 1.3417 | | 0.0604 | 7.75 | 2108 | 1.7662 | 0.2685 | 1.7662 | 1.3290 | | 0.0604 | 7.7574 | 2110 | 1.7073 | 0.2776 | 1.7073 | 1.3066 | | 0.0604 | 7.7647 | 2112 | 1.6403 | 0.2966 | 1.6403 | 1.2807 | | 0.0604 | 7.7721 | 2114 | 1.5939 | 0.2966 | 1.5939 | 1.2625 | | 0.0604 | 7.7794 | 2116 | 1.5535 | 0.2896 | 1.5535 | 1.2464 | | 0.0604 | 7.7868 | 2118 | 1.5032 | 0.3101 | 1.5032 | 1.2261 | | 0.0604 | 7.7941 | 2120 | 1.4847 | 0.3101 | 1.4847 | 1.2185 | | 0.0604 | 7.8015 | 2122 | 1.4627 | 0.3101 | 1.4627 | 1.2094 | | 0.0604 | 7.8088 | 2124 | 1.4642 | 0.3101 | 1.4642 | 1.2100 | | 0.0604 | 7.8162 | 2126 | 1.4821 | 0.3101 | 1.4821 | 1.2174 | | 0.0604 | 7.8235 | 2128 | 1.4967 | 0.2997 | 1.4967 | 1.2234 | | 0.0604 | 7.8309 | 2130 | 1.5208 | 0.2997 | 1.5208 | 1.2332 | | 0.0604 | 7.8382 | 2132 | 1.5222 | 0.2997 | 1.5222 | 1.2338 | | 0.0604 | 7.8456 | 2134 | 1.4970 | 0.2997 | 1.4970 | 1.2235 | | 0.0604 | 7.8529 | 2136 | 1.4762 | 0.3101 | 1.4762 | 1.2150 | | 0.0604 | 7.8603 | 2138 | 
1.4690 | 0.3443 | 1.4690 | 1.2120 | | 0.0604 | 7.8676 | 2140 | 1.5004 | 0.3334 | 1.5004 | 1.2249 | | 0.0604 | 7.875 | 2142 | 1.5176 | 0.3334 | 1.5176 | 1.2319 | | 0.0604 | 7.8824 | 2144 | 1.5348 | 0.3334 | 1.5348 | 1.2389 | | 0.0604 | 7.8897 | 2146 | 1.5481 | 0.3392 | 1.5481 | 1.2442 | | 0.0604 | 7.8971 | 2148 | 1.5374 | 0.3499 | 1.5374 | 1.2399 | | 0.0604 | 7.9044 | 2150 | 1.5209 | 0.3499 | 1.5209 | 1.2333 | | 0.0604 | 7.9118 | 2152 | 1.5219 | 0.3499 | 1.5219 | 1.2336 | | 0.0604 | 7.9191 | 2154 | 1.5283 | 0.3392 | 1.5283 | 1.2362 | | 0.0604 | 7.9265 | 2156 | 1.5458 | 0.3392 | 1.5458 | 1.2433 | | 0.0604 | 7.9338 | 2158 | 1.5571 | 0.3392 | 1.5571 | 1.2478 | | 0.0604 | 7.9412 | 2160 | 1.5563 | 0.3392 | 1.5563 | 1.2475 | | 0.0604 | 7.9485 | 2162 | 1.5523 | 0.3392 | 1.5523 | 1.2459 | | 0.0604 | 7.9559 | 2164 | 1.5127 | 0.3392 | 1.5127 | 1.2299 | | 0.0604 | 7.9632 | 2166 | 1.4735 | 0.3499 | 1.4735 | 1.2139 | | 0.0604 | 7.9706 | 2168 | 1.4599 | 0.3499 | 1.4599 | 1.2083 | | 0.0604 | 7.9779 | 2170 | 1.4614 | 0.3499 | 1.4614 | 1.2089 | | 0.0604 | 7.9853 | 2172 | 1.4567 | 0.3499 | 1.4567 | 1.2070 | | 0.0604 | 7.9926 | 2174 | 1.4689 | 0.3499 | 1.4689 | 1.2120 | | 0.0604 | 8.0 | 2176 | 1.4879 | 0.3392 | 1.4879 | 1.2198 | | 0.0604 | 8.0074 | 2178 | 1.5105 | 0.3392 | 1.5105 | 1.2290 | | 0.0604 | 8.0147 | 2180 | 1.5242 | 0.3392 | 1.5242 | 1.2346 | | 0.0604 | 8.0221 | 2182 | 1.5628 | 0.2835 | 1.5628 | 1.2501 | | 0.0604 | 8.0294 | 2184 | 1.6047 | 0.2835 | 1.6047 | 1.2668 | | 0.0604 | 8.0368 | 2186 | 1.6597 | 0.2776 | 1.6597 | 1.2883 | | 0.0604 | 8.0441 | 2188 | 1.6849 | 0.2776 | 1.6849 | 1.2980 | | 0.0604 | 8.0515 | 2190 | 1.6644 | 0.2776 | 1.6644 | 1.2901 | | 0.0604 | 8.0588 | 2192 | 1.5959 | 0.3156 | 1.5959 | 1.2633 | | 0.0604 | 8.0662 | 2194 | 1.5069 | 0.3392 | 1.5069 | 1.2275 | | 0.0604 | 8.0735 | 2196 | 1.4337 | 0.3609 | 1.4337 | 1.1974 | | 0.0604 | 8.0809 | 2198 | 1.3816 | 0.3671 | 1.3816 | 1.1754 | | 0.0604 | 8.0882 | 2200 | 1.3614 | 0.3645 | 1.3614 | 1.1668 | | 0.0604 | 8.0956 | 2202 | 1.3552 | 0.3645 | 1.3552 | 1.1642 | | 0.0604 | 8.1029 | 2204 | 1.3921 | 0.3555 | 1.3921 | 1.1799 | | 0.0604 | 8.1103 | 2206 | 1.4504 | 0.3609 | 1.4504 | 1.2043 | | 0.0604 | 8.1176 | 2208 | 1.4872 | 0.3609 | 1.4872 | 1.2195 | | 0.0604 | 8.125 | 2210 | 1.5310 | 0.3392 | 1.5310 | 1.2374 | | 0.0604 | 8.1324 | 2212 | 1.5764 | 0.3392 | 1.5764 | 1.2555 | | 0.0604 | 8.1397 | 2214 | 1.5947 | 0.3392 | 1.5947 | 1.2628 | | 0.0604 | 8.1471 | 2216 | 1.6160 | 0.3065 | 1.6160 | 1.2712 | | 0.0604 | 8.1544 | 2218 | 1.6101 | 0.3065 | 1.6101 | 1.2689 | | 0.0604 | 8.1618 | 2220 | 1.5958 | 0.3065 | 1.5958 | 1.2633 | | 0.0604 | 8.1691 | 2222 | 1.6028 | 0.3065 | 1.6028 | 1.2660 | | 0.0604 | 8.1765 | 2224 | 1.5905 | 0.3065 | 1.5905 | 1.2612 | | 0.0604 | 8.1838 | 2226 | 1.5399 | 0.3167 | 1.5399 | 1.2409 | | 0.0604 | 8.1912 | 2228 | 1.4974 | 0.3167 | 1.4974 | 1.2237 | | 0.0604 | 8.1985 | 2230 | 1.4574 | 0.3272 | 1.4574 | 1.2072 | | 0.0604 | 8.2059 | 2232 | 1.4359 | 0.3380 | 1.4359 | 1.1983 | | 0.0604 | 8.2132 | 2234 | 1.4174 | 0.3175 | 1.4174 | 1.1905 | | 0.0604 | 8.2206 | 2236 | 1.4332 | 0.3380 | 1.4332 | 1.1972 | | 0.0604 | 8.2279 | 2238 | 1.4560 | 0.3380 | 1.4560 | 1.2067 | | 0.0604 | 8.2353 | 2240 | 1.4688 | 0.3272 | 1.4688 | 1.2119 | | 0.0604 | 8.2426 | 2242 | 1.4854 | 0.3272 | 1.4854 | 1.2188 | | 0.0604 | 8.25 | 2244 | 1.4797 | 0.3272 | 1.4797 | 1.2164 | | 0.0604 | 8.2574 | 2246 | 1.4851 | 0.3272 | 1.4851 | 1.2187 | | 0.0604 | 8.2647 | 2248 | 1.5024 | 0.3272 | 1.5024 | 1.2257 | | 0.0604 | 8.2721 | 2250 | 1.5074 | 0.3272 | 1.5074 | 1.2277 
| | 0.0604 | 8.2794 | 2252 | 1.4951 | 0.3272 | 1.4951 | 1.2227 | | 0.0604 | 8.2868 | 2254 | 1.4869 | 0.3609 | 1.4869 | 1.2194 | | 0.0604 | 8.2941 | 2256 | 1.4658 | 0.3609 | 1.4658 | 1.2107 | | 0.0604 | 8.3015 | 2258 | 1.4555 | 0.3609 | 1.4555 | 1.2065 | | 0.0604 | 8.3088 | 2260 | 1.4444 | 0.3609 | 1.4444 | 1.2018 | | 0.0604 | 8.3162 | 2262 | 1.4512 | 0.3609 | 1.4512 | 1.2046 | | 0.0604 | 8.3235 | 2264 | 1.4783 | 0.3609 | 1.4783 | 1.2158 | | 0.0604 | 8.3309 | 2266 | 1.5062 | 0.3499 | 1.5062 | 1.2273 | | 0.0604 | 8.3382 | 2268 | 1.5152 | 0.3499 | 1.5152 | 1.2309 | | 0.0604 | 8.3456 | 2270 | 1.5119 | 0.3499 | 1.5119 | 1.2296 | | 0.0604 | 8.3529 | 2272 | 1.4994 | 0.3499 | 1.4994 | 1.2245 | | 0.0604 | 8.3603 | 2274 | 1.4854 | 0.3499 | 1.4854 | 1.2188 | | 0.0604 | 8.3676 | 2276 | 1.4681 | 0.3499 | 1.4681 | 1.2117 | | 0.0604 | 8.375 | 2278 | 1.4733 | 0.3499 | 1.4733 | 1.2138 | | 0.0604 | 8.3824 | 2280 | 1.4818 | 0.3499 | 1.4818 | 1.2173 | | 0.0604 | 8.3897 | 2282 | 1.4925 | 0.3499 | 1.4925 | 1.2217 | | 0.0604 | 8.3971 | 2284 | 1.4913 | 0.3499 | 1.4913 | 1.2212 | | 0.0604 | 8.4044 | 2286 | 1.4886 | 0.3499 | 1.4886 | 1.2201 | | 0.0604 | 8.4118 | 2288 | 1.4554 | 0.3609 | 1.4554 | 1.2064 | | 0.0604 | 8.4191 | 2290 | 1.3988 | 0.3384 | 1.3988 | 1.1827 | | 0.0604 | 8.4265 | 2292 | 1.3732 | 0.3353 | 1.3732 | 1.1718 | | 0.0604 | 8.4338 | 2294 | 1.3583 | 0.3469 | 1.3583 | 1.1655 | | 0.0604 | 8.4412 | 2296 | 1.3520 | 0.3589 | 1.3520 | 1.1628 | | 0.0604 | 8.4485 | 2298 | 1.3739 | 0.3353 | 1.3739 | 1.1721 | | 0.0604 | 8.4559 | 2300 | 1.4145 | 0.3384 | 1.4145 | 1.1893 | | 0.0604 | 8.4632 | 2302 | 1.4411 | 0.3067 | 1.4411 | 1.2005 | | 0.0604 | 8.4706 | 2304 | 1.4633 | 0.3067 | 1.4633 | 1.2097 | | 0.0604 | 8.4779 | 2306 | 1.4699 | 0.3101 | 1.4699 | 1.2124 | | 0.0604 | 8.4853 | 2308 | 1.4635 | 0.3101 | 1.4635 | 1.2098 | | 0.0604 | 8.4926 | 2310 | 1.4390 | 0.3414 | 1.4390 | 1.1996 | | 0.0604 | 8.5 | 2312 | 1.4171 | 0.3414 | 1.4171 | 1.1904 | | 0.0604 | 8.5074 | 2314 | 1.3875 | 0.3384 | 1.3875 | 1.1779 | | 0.0604 | 8.5147 | 2316 | 1.3631 | 0.3469 | 1.3631 | 1.1675 | | 0.0604 | 8.5221 | 2318 | 1.3471 | 0.3589 | 1.3471 | 1.1606 | | 0.0604 | 8.5294 | 2320 | 1.3488 | 0.3469 | 1.3488 | 1.1614 | | 0.0604 | 8.5368 | 2322 | 1.3651 | 0.3528 | 1.3651 | 1.1684 | | 0.0604 | 8.5441 | 2324 | 1.3970 | 0.3414 | 1.3970 | 1.1819 | | 0.0604 | 8.5515 | 2326 | 1.4608 | 0.3609 | 1.4608 | 1.2086 | | 0.0604 | 8.5588 | 2328 | 1.5263 | 0.3392 | 1.5263 | 1.2354 | | 0.0604 | 8.5662 | 2330 | 1.6090 | 0.2776 | 1.6090 | 1.2685 | | 0.0604 | 8.5735 | 2332 | 1.6773 | 0.2776 | 1.6773 | 1.2951 | | 0.0604 | 8.5809 | 2334 | 1.7366 | 0.2776 | 1.7366 | 1.3178 | | 0.0604 | 8.5882 | 2336 | 1.7681 | 0.2596 | 1.7681 | 1.3297 | | 0.0604 | 8.5956 | 2338 | 1.7892 | 0.2633 | 1.7892 | 1.3376 | | 0.0604 | 8.6029 | 2340 | 1.7782 | 0.2596 | 1.7782 | 1.3335 | | 0.0604 | 8.6103 | 2342 | 1.7419 | 0.2776 | 1.7419 | 1.3198 | | 0.0604 | 8.6176 | 2344 | 1.6945 | 0.2776 | 1.6945 | 1.3017 | | 0.0604 | 8.625 | 2346 | 1.6608 | 0.2776 | 1.6608 | 1.2887 | | 0.0604 | 8.6324 | 2348 | 1.6416 | 0.2740 | 1.6416 | 1.2812 | | 0.0604 | 8.6397 | 2350 | 1.6225 | 0.2740 | 1.6225 | 1.2738 | | 0.0604 | 8.6471 | 2352 | 1.6034 | 0.2740 | 1.6034 | 1.2663 | | 0.0604 | 8.6544 | 2354 | 1.5722 | 0.2835 | 1.5722 | 1.2539 | | 0.0604 | 8.6618 | 2356 | 1.5369 | 0.3392 | 1.5369 | 1.2397 | | 0.0604 | 8.6691 | 2358 | 1.5056 | 0.3499 | 1.5056 | 1.2270 | | 0.0604 | 8.6765 | 2360 | 1.4906 | 0.3499 | 1.4906 | 1.2209 | | 0.0604 | 8.6838 | 2362 | 1.4791 | 0.3609 | 1.4791 | 1.2162 | | 0.0604 | 8.6912 | 2364 | 
1.4500 | 0.3582 | 1.4500 | 1.2041 | | 0.0604 | 8.6985 | 2366 | 1.4323 | 0.3414 | 1.4323 | 1.1968 | | 0.0604 | 8.7059 | 2368 | 1.4311 | 0.3414 | 1.4311 | 1.1963 | | 0.0604 | 8.7132 | 2370 | 1.4311 | 0.3414 | 1.4311 | 1.1963 | | 0.0604 | 8.7206 | 2372 | 1.4310 | 0.3414 | 1.4310 | 1.1963 | | 0.0604 | 8.7279 | 2374 | 1.4478 | 0.3067 | 1.4478 | 1.2032 | | 0.0604 | 8.7353 | 2376 | 1.4607 | 0.3241 | 1.4607 | 1.2086 | | 0.0604 | 8.7426 | 2378 | 1.4842 | 0.3241 | 1.4842 | 1.2183 | | 0.0604 | 8.75 | 2380 | 1.4906 | 0.3241 | 1.4906 | 1.2209 | | 0.0604 | 8.7574 | 2382 | 1.4976 | 0.3135 | 1.4976 | 1.2238 | | 0.0604 | 8.7647 | 2384 | 1.5228 | 0.3167 | 1.5228 | 1.2340 | | 0.0604 | 8.7721 | 2386 | 1.5382 | 0.3065 | 1.5382 | 1.2402 | | 0.0604 | 8.7794 | 2388 | 1.5598 | 0.3065 | 1.5598 | 1.2489 | | 0.0604 | 8.7868 | 2390 | 1.5824 | 0.3065 | 1.5824 | 1.2580 | | 0.0604 | 8.7941 | 2392 | 1.5947 | 0.3065 | 1.5947 | 1.2628 | | 0.0604 | 8.8015 | 2394 | 1.5942 | 0.3065 | 1.5942 | 1.2626 | | 0.0604 | 8.8088 | 2396 | 1.5913 | 0.3065 | 1.5913 | 1.2615 | | 0.0604 | 8.8162 | 2398 | 1.5969 | 0.2835 | 1.5969 | 1.2637 | | 0.0604 | 8.8235 | 2400 | 1.5984 | 0.2835 | 1.5984 | 1.2643 | | 0.0604 | 8.8309 | 2402 | 1.6077 | 0.2835 | 1.6077 | 1.2680 | | 0.0604 | 8.8382 | 2404 | 1.6233 | 0.2870 | 1.6233 | 1.2741 | | 0.0604 | 8.8456 | 2406 | 1.6168 | 0.2870 | 1.6168 | 1.2715 | | 0.0604 | 8.8529 | 2408 | 1.5957 | 0.2870 | 1.5957 | 1.2632 | | 0.0604 | 8.8603 | 2410 | 1.5633 | 0.2835 | 1.5633 | 1.2503 | | 0.0604 | 8.8676 | 2412 | 1.5262 | 0.2835 | 1.5262 | 1.2354 | | 0.0604 | 8.875 | 2414 | 1.5043 | 0.3065 | 1.5043 | 1.2265 | | 0.0604 | 8.8824 | 2416 | 1.4956 | 0.3135 | 1.4956 | 1.2230 | | 0.0604 | 8.8897 | 2418 | 1.5080 | 0.3065 | 1.5080 | 1.2280 | | 0.0604 | 8.8971 | 2420 | 1.5262 | 0.2835 | 1.5262 | 1.2354 | | 0.0604 | 8.9044 | 2422 | 1.5322 | 0.2835 | 1.5322 | 1.2378 | | 0.0604 | 8.9118 | 2424 | 1.5336 | 0.2835 | 1.5336 | 1.2384 | | 0.0604 | 8.9191 | 2426 | 1.5366 | 0.2835 | 1.5366 | 1.2396 | | 0.0604 | 8.9265 | 2428 | 1.5347 | 0.2835 | 1.5347 | 1.2388 | | 0.0604 | 8.9338 | 2430 | 1.5339 | 0.2835 | 1.5339 | 1.2385 | | 0.0604 | 8.9412 | 2432 | 1.5252 | 0.2835 | 1.5252 | 1.2350 | | 0.0604 | 8.9485 | 2434 | 1.5245 | 0.2835 | 1.5245 | 1.2347 | | 0.0604 | 8.9559 | 2436 | 1.5382 | 0.2835 | 1.5382 | 1.2402 | | 0.0604 | 8.9632 | 2438 | 1.5579 | 0.2835 | 1.5579 | 1.2482 | | 0.0604 | 8.9706 | 2440 | 1.5750 | 0.2835 | 1.5750 | 1.2550 | | 0.0604 | 8.9779 | 2442 | 1.5905 | 0.2835 | 1.5905 | 1.2612 | | 0.0604 | 8.9853 | 2444 | 1.6048 | 0.2835 | 1.6048 | 1.2668 | | 0.0604 | 8.9926 | 2446 | 1.6111 | 0.2835 | 1.6111 | 1.2693 | | 0.0604 | 9.0 | 2448 | 1.6105 | 0.2835 | 1.6105 | 1.2691 | | 0.0604 | 9.0074 | 2450 | 1.6001 | 0.2835 | 1.6001 | 1.2650 | | 0.0604 | 9.0147 | 2452 | 1.5754 | 0.2835 | 1.5754 | 1.2551 | | 0.0604 | 9.0221 | 2454 | 1.5509 | 0.2835 | 1.5509 | 1.2454 | | 0.0604 | 9.0294 | 2456 | 1.5361 | 0.2835 | 1.5361 | 1.2394 | | 0.0604 | 9.0368 | 2458 | 1.5176 | 0.3065 | 1.5176 | 1.2319 | | 0.0604 | 9.0441 | 2460 | 1.5062 | 0.3065 | 1.5062 | 1.2273 | | 0.0604 | 9.0515 | 2462 | 1.4959 | 0.3167 | 1.4959 | 1.2231 | | 0.0604 | 9.0588 | 2464 | 1.4777 | 0.3135 | 1.4777 | 1.2156 | | 0.0604 | 9.0662 | 2466 | 1.4612 | 0.3135 | 1.4612 | 1.2088 | | 0.0604 | 9.0735 | 2468 | 1.4445 | 0.3067 | 1.4445 | 1.2019 | | 0.0604 | 9.0809 | 2470 | 1.4322 | 0.3067 | 1.4322 | 1.1967 | | 0.0604 | 9.0882 | 2472 | 1.4309 | 0.3067 | 1.4309 | 1.1962 | | 0.0604 | 9.0956 | 2474 | 1.4369 | 0.3067 | 1.4369 | 1.1987 | | 0.0604 | 9.1029 | 2476 | 1.4480 | 0.3241 | 1.4480 | 
1.2033 | | 0.0604 | 9.1103 | 2478 | 1.4503 | 0.3241 | 1.4503 | 1.2043 | | 0.0604 | 9.1176 | 2480 | 1.4570 | 0.3135 | 1.4570 | 1.2071 | | 0.0604 | 9.125 | 2482 | 1.4658 | 0.3135 | 1.4658 | 1.2107 | | 0.0604 | 9.1324 | 2484 | 1.4782 | 0.3135 | 1.4782 | 1.2158 | | 0.0604 | 9.1397 | 2486 | 1.5037 | 0.3032 | 1.5037 | 1.2263 | | 0.0604 | 9.1471 | 2488 | 1.5312 | 0.2835 | 1.5312 | 1.2374 | | 0.0604 | 9.1544 | 2490 | 1.5553 | 0.2835 | 1.5553 | 1.2471 | | 0.0604 | 9.1618 | 2492 | 1.5615 | 0.2835 | 1.5615 | 1.2496 | | 0.0604 | 9.1691 | 2494 | 1.5504 | 0.2835 | 1.5504 | 1.2451 | | 0.0604 | 9.1765 | 2496 | 1.5249 | 0.2835 | 1.5249 | 1.2349 | | 0.0604 | 9.1838 | 2498 | 1.5017 | 0.3135 | 1.5017 | 1.2254 | | 0.0527 | 9.1912 | 2500 | 1.4764 | 0.3135 | 1.4764 | 1.2151 | | 0.0527 | 9.1985 | 2502 | 1.4538 | 0.2962 | 1.4538 | 1.2057 | | 0.0527 | 9.2059 | 2504 | 1.4406 | 0.2962 | 1.4406 | 1.2002 | | 0.0527 | 9.2132 | 2506 | 1.4347 | 0.3067 | 1.4347 | 1.1978 | | 0.0527 | 9.2206 | 2508 | 1.4430 | 0.2962 | 1.4430 | 1.2012 | | 0.0527 | 9.2279 | 2510 | 1.4517 | 0.2962 | 1.4517 | 1.2049 | | 0.0527 | 9.2353 | 2512 | 1.4559 | 0.2962 | 1.4559 | 1.2066 | | 0.0527 | 9.2426 | 2514 | 1.4558 | 0.3135 | 1.4558 | 1.2066 | | 0.0527 | 9.25 | 2516 | 1.4623 | 0.3135 | 1.4623 | 1.2092 | | 0.0527 | 9.2574 | 2518 | 1.4715 | 0.3135 | 1.4715 | 1.2130 | | 0.0527 | 9.2647 | 2520 | 1.4822 | 0.3135 | 1.4822 | 1.2175 | | 0.0527 | 9.2721 | 2522 | 1.4923 | 0.3167 | 1.4923 | 1.2216 | | 0.0527 | 9.2794 | 2524 | 1.5014 | 0.3167 | 1.5014 | 1.2253 | | 0.0527 | 9.2868 | 2526 | 1.5105 | 0.3065 | 1.5105 | 1.2290 | | 0.0527 | 9.2941 | 2528 | 1.5147 | 0.3065 | 1.5147 | 1.2307 | | 0.0527 | 9.3015 | 2530 | 1.5163 | 0.3065 | 1.5163 | 1.2314 | | 0.0527 | 9.3088 | 2532 | 1.5252 | 0.2835 | 1.5252 | 1.2350 | | 0.0527 | 9.3162 | 2534 | 1.5433 | 0.2835 | 1.5433 | 1.2423 | | 0.0527 | 9.3235 | 2536 | 1.5677 | 0.2835 | 1.5677 | 1.2521 | | 0.0527 | 9.3309 | 2538 | 1.5830 | 0.2835 | 1.5830 | 1.2582 | | 0.0527 | 9.3382 | 2540 | 1.5881 | 0.2835 | 1.5881 | 1.2602 | | 0.0527 | 9.3456 | 2542 | 1.5915 | 0.2835 | 1.5915 | 1.2615 | | 0.0527 | 9.3529 | 2544 | 1.5968 | 0.2835 | 1.5968 | 1.2636 | | 0.0527 | 9.3603 | 2546 | 1.6002 | 0.2835 | 1.6002 | 1.2650 | | 0.0527 | 9.3676 | 2548 | 1.5996 | 0.2835 | 1.5996 | 1.2648 | | 0.0527 | 9.375 | 2550 | 1.5903 | 0.2835 | 1.5903 | 1.2611 | | 0.0527 | 9.3824 | 2552 | 1.5874 | 0.2835 | 1.5874 | 1.2599 | | 0.0527 | 9.3897 | 2554 | 1.5837 | 0.2835 | 1.5837 | 1.2585 | | 0.0527 | 9.3971 | 2556 | 1.5797 | 0.2835 | 1.5797 | 1.2569 | | 0.0527 | 9.4044 | 2558 | 1.5867 | 0.2835 | 1.5867 | 1.2596 | | 0.0527 | 9.4118 | 2560 | 1.5915 | 0.2835 | 1.5915 | 1.2616 | | 0.0527 | 9.4191 | 2562 | 1.5878 | 0.2835 | 1.5878 | 1.2601 | | 0.0527 | 9.4265 | 2564 | 1.5763 | 0.2835 | 1.5763 | 1.2555 | | 0.0527 | 9.4338 | 2566 | 1.5666 | 0.2835 | 1.5666 | 1.2517 | | 0.0527 | 9.4412 | 2568 | 1.5583 | 0.3065 | 1.5583 | 1.2483 | | 0.0527 | 9.4485 | 2570 | 1.5464 | 0.3065 | 1.5464 | 1.2435 | | 0.0527 | 9.4559 | 2572 | 1.5300 | 0.3065 | 1.5300 | 1.2369 | | 0.0527 | 9.4632 | 2574 | 1.5180 | 0.3167 | 1.5180 | 1.2321 | | 0.0527 | 9.4706 | 2576 | 1.5116 | 0.3167 | 1.5116 | 1.2295 | | 0.0527 | 9.4779 | 2578 | 1.5088 | 0.3167 | 1.5088 | 1.2283 | | 0.0527 | 9.4853 | 2580 | 1.5064 | 0.3167 | 1.5064 | 1.2274 | | 0.0527 | 9.4926 | 2582 | 1.5084 | 0.3167 | 1.5084 | 1.2282 | | 0.0527 | 9.5 | 2584 | 1.5162 | 0.3065 | 1.5162 | 1.2313 | | 0.0527 | 9.5074 | 2586 | 1.5181 | 0.3392 | 1.5181 | 1.2321 | | 0.0527 | 9.5147 | 2588 | 1.5164 | 0.3392 | 1.5164 | 1.2314 | | 0.0527 | 9.5221 | 2590 
| 1.5132 | 0.3392 | 1.5132 | 1.2301 | | 0.0527 | 9.5294 | 2592 | 1.5107 | 0.3392 | 1.5107 | 1.2291 | | 0.0527 | 9.5368 | 2594 | 1.5084 | 0.3392 | 1.5084 | 1.2282 | | 0.0527 | 9.5441 | 2596 | 1.5095 | 0.3392 | 1.5095 | 1.2286 | | 0.0527 | 9.5515 | 2598 | 1.5066 | 0.3392 | 1.5066 | 1.2274 | | 0.0527 | 9.5588 | 2600 | 1.5131 | 0.3392 | 1.5131 | 1.2301 | | 0.0527 | 9.5662 | 2602 | 1.5215 | 0.3156 | 1.5215 | 1.2335 | | 0.0527 | 9.5735 | 2604 | 1.5272 | 0.3156 | 1.5272 | 1.2358 | | 0.0527 | 9.5809 | 2606 | 1.5317 | 0.2835 | 1.5317 | 1.2376 | | 0.0527 | 9.5882 | 2608 | 1.5346 | 0.2835 | 1.5346 | 1.2388 | | 0.0527 | 9.5956 | 2610 | 1.5439 | 0.2835 | 1.5439 | 1.2426 | | 0.0527 | 9.6029 | 2612 | 1.5535 | 0.2835 | 1.5535 | 1.2464 | | 0.0527 | 9.6103 | 2614 | 1.5678 | 0.2835 | 1.5678 | 1.2521 | | 0.0527 | 9.6176 | 2616 | 1.5839 | 0.2835 | 1.5839 | 1.2585 | | 0.0527 | 9.625 | 2618 | 1.5979 | 0.2740 | 1.5979 | 1.2641 | | 0.0527 | 9.6324 | 2620 | 1.6071 | 0.2740 | 1.6071 | 1.2677 | | 0.0527 | 9.6397 | 2622 | 1.6070 | 0.2740 | 1.6070 | 1.2677 | | 0.0527 | 9.6471 | 2624 | 1.6025 | 0.2740 | 1.6025 | 1.2659 | | 0.0527 | 9.6544 | 2626 | 1.6000 | 0.2740 | 1.6000 | 1.2649 | | 0.0527 | 9.6618 | 2628 | 1.5995 | 0.2740 | 1.5995 | 1.2647 | | 0.0527 | 9.6691 | 2630 | 1.6007 | 0.2740 | 1.6007 | 1.2652 | | 0.0527 | 9.6765 | 2632 | 1.6068 | 0.2740 | 1.6068 | 1.2676 | | 0.0527 | 9.6838 | 2634 | 1.6078 | 0.2740 | 1.6078 | 1.2680 | | 0.0527 | 9.6912 | 2636 | 1.6046 | 0.2740 | 1.6046 | 1.2667 | | 0.0527 | 9.6985 | 2638 | 1.6014 | 0.2740 | 1.6014 | 1.2655 | | 0.0527 | 9.7059 | 2640 | 1.5953 | 0.2835 | 1.5953 | 1.2631 | | 0.0527 | 9.7132 | 2642 | 1.5904 | 0.2835 | 1.5904 | 1.2611 | | 0.0527 | 9.7206 | 2644 | 1.5837 | 0.2835 | 1.5837 | 1.2585 | | 0.0527 | 9.7279 | 2646 | 1.5778 | 0.2835 | 1.5778 | 1.2561 | | 0.0527 | 9.7353 | 2648 | 1.5766 | 0.2835 | 1.5766 | 1.2556 | | 0.0527 | 9.7426 | 2650 | 1.5791 | 0.2835 | 1.5791 | 1.2566 | | 0.0527 | 9.75 | 2652 | 1.5846 | 0.2835 | 1.5846 | 1.2588 | | 0.0527 | 9.7574 | 2654 | 1.5927 | 0.2835 | 1.5927 | 1.2620 | | 0.0527 | 9.7647 | 2656 | 1.6002 | 0.2740 | 1.6002 | 1.2650 | | 0.0527 | 9.7721 | 2658 | 1.6031 | 0.2740 | 1.6031 | 1.2661 | | 0.0527 | 9.7794 | 2660 | 1.6047 | 0.2740 | 1.6047 | 1.2668 | | 0.0527 | 9.7868 | 2662 | 1.6056 | 0.2740 | 1.6056 | 1.2671 | | 0.0527 | 9.7941 | 2664 | 1.6037 | 0.2740 | 1.6037 | 1.2664 | | 0.0527 | 9.8015 | 2666 | 1.6000 | 0.2740 | 1.6000 | 1.2649 | | 0.0527 | 9.8088 | 2668 | 1.5951 | 0.2740 | 1.5951 | 1.2630 | | 0.0527 | 9.8162 | 2670 | 1.5927 | 0.2835 | 1.5927 | 1.2620 | | 0.0527 | 9.8235 | 2672 | 1.5903 | 0.2835 | 1.5903 | 1.2611 | | 0.0527 | 9.8309 | 2674 | 1.5914 | 0.2835 | 1.5914 | 1.2615 | | 0.0527 | 9.8382 | 2676 | 1.5927 | 0.2835 | 1.5927 | 1.2620 | | 0.0527 | 9.8456 | 2678 | 1.5973 | 0.2740 | 1.5973 | 1.2638 | | 0.0527 | 9.8529 | 2680 | 1.6028 | 0.2740 | 1.6028 | 1.2660 | | 0.0527 | 9.8603 | 2682 | 1.6086 | 0.2740 | 1.6086 | 1.2683 | | 0.0527 | 9.8676 | 2684 | 1.6130 | 0.2740 | 1.6130 | 1.2701 | | 0.0527 | 9.875 | 2686 | 1.6152 | 0.2740 | 1.6152 | 1.2709 | | 0.0527 | 9.8824 | 2688 | 1.6166 | 0.2740 | 1.6166 | 1.2715 | | 0.0527 | 9.8897 | 2690 | 1.6182 | 0.2740 | 1.6182 | 1.2721 | | 0.0527 | 9.8971 | 2692 | 1.6179 | 0.2740 | 1.6179 | 1.2719 | | 0.0527 | 9.9044 | 2694 | 1.6167 | 0.2740 | 1.6167 | 1.2715 | | 0.0527 | 9.9118 | 2696 | 1.6161 | 0.2740 | 1.6161 | 1.2713 | | 0.0527 | 9.9191 | 2698 | 1.6159 | 0.2740 | 1.6159 | 1.2712 | | 0.0527 | 9.9265 | 2700 | 1.6155 | 0.2740 | 1.6155 | 1.2710 | | 0.0527 | 9.9338 | 2702 | 1.6153 | 0.2740 | 1.6153 | 
1.2710 | | 0.0527 | 9.9412 | 2704 | 1.6158 | 0.2740 | 1.6158 | 1.2712 | | 0.0527 | 9.9485 | 2706 | 1.6165 | 0.2740 | 1.6165 | 1.2714 | | 0.0527 | 9.9559 | 2708 | 1.6170 | 0.2740 | 1.6170 | 1.2716 | | 0.0527 | 9.9632 | 2710 | 1.6182 | 0.2740 | 1.6182 | 1.2721 | | 0.0527 | 9.9706 | 2712 | 1.6192 | 0.2740 | 1.6192 | 1.2725 | | 0.0527 | 9.9779 | 2714 | 1.6196 | 0.2740 | 1.6196 | 1.2726 | | 0.0527 | 9.9853 | 2716 | 1.6200 | 0.2740 | 1.6200 | 1.2728 | | 0.0527 | 9.9926 | 2718 | 1.6201 | 0.2740 | 1.6201 | 1.2728 | | 0.0527 | 10.0 | 2720 | 1.6201 | 0.2740 | 1.6201 | 1.2728 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
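The log above ends without a usage snippet, so the following is only a minimal inference sketch, not the author's documented procedure. It assumes the fine-tuned checkpoint is available (for example, from the trainer output directory) and exposes a standard sequence-classification head whose single output is the predicted score, which would be consistent with the MSE/RMSE/QWK metrics reported above; the checkpoint path and example text are placeholders.

```py
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder path: point this at the trainer output directory or the Hub repo of the checkpoint.
checkpoint = "path/to/this-fine-tuned-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# Placeholder Arabic input to be scored.
text = "نص عربي للتقييم"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumes a single-output regression head; for a multi-class head use logits.argmax(-1) instead.
print(logits.squeeze().item())
```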
PEGurevich/detr-finetuned-balloon-v2-resized-flip_and_rot-scheduled
PEGurevich
2024-11-26T13:31:00Z
192
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-11-26T13:20:59Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
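Because the quick-start section above is still empty, here is a minimal, unverified sketch for trying the checkpoint. It assumes the model follows the standard `DetrForObjectDetection` / `DetrImageProcessor` interface implied by the `detr` and `object-detection` tags; the image path and score threshold are placeholders.

```py
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

repo = "PEGurevich/detr-finetuned-balloon-v2-resized-flip_and_rot-scheduled"
processor = DetrImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)
model.eval()

# Placeholder image path.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in (x_min, y_min, x_max, y_max) pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.7)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```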
outlookAi/rbeUvXxEui
outlookAi
2024-11-26T13:26:53Z
23
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
2024-11-26T12:53:32Z
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: thaivintagedress,detaildress --- # Rbeuvxxeui <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `thaivintagedress,detaildress` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('outlookAi/rbeUvXxEui', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
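As a follow-up to the snippet above, a call that actually includes the trigger words could look like this; the descriptive part of the prompt is illustrative only.

```py
# Continuing from the pipeline loaded above.
prompt = "thaivintagedress,detaildress, a woman wearing a detailed Thai vintage dress, studio lighting"
image = pipeline(prompt).images[0]
image.save("output.png")
```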
Doncle/Toobo_GGUF_V2
Doncle
2024-11-26T13:26:10Z
6
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "dataset:Doncle/tooboo", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T11:11:40Z
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en datasets: - Doncle/tooboo --- # Uploaded model - **Developed by:** Doncle - **License:** apache-2.0 - **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit Hello, I'm Tooboo the bonobo 🐒, and I'm here to help you with today's weather 🌈. Feel free to ask me what the weather will be like where you are ✨🐵 <img src="https://pbs.twimg.com/media/FWI3mvJWQAEORTY.jpg" width="200"/> This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
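Since the repository ships GGUF weights, one way to run them locally is with `llama-cpp-python`; the sketch below is an assumption-laden example rather than a documented workflow. The exact `.gguf` filename is not listed here, so the wildcard is a placeholder, and the generation settings may need tuning.

```py
from llama_cpp import Llama

# Placeholder filename pattern: replace it with the actual .gguf file published in the repo.
llm = Llama.from_pretrained(
    repo_id="Doncle/Toobo_GGUF_V2",
    filename="*.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What will the weather be like in Paris today?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```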
CATIE-AQ/NERmembert-base-3entities
CATIE-AQ
2024-11-26T13:23:56Z
38,266
2
transformers
[ "transformers", "tensorboard", "safetensors", "camembert", "token-classification", "fr", "dataset:CATIE-AQ/frenchNER_3entities", "arxiv:1910.09700", "base_model:almanach/camembert-base", "base_model:finetune:almanach/camembert-base", "doi:10.57967/hf/1750", "license:mit", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2023-12-15T13:55:25Z
--- license: mit base_model: camembert-base metrics: - precision - recall - f1 - accuracy model-index: - name: NERmembert-base-3entities results: [] datasets: - CATIE-AQ/frenchNER_3entities language: - fr widget: - text: >- Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques. library_name: transformers pipeline_tag: token-classification co2_eq_emissions: 35 new_version: CATIE-AQ/NERmemberta-3entities --- # NERmembert-base-3entities ## Model Description We present **NERmembert-base-3entities**, which is a [CamemBERT base](https://huggingface.co/camembert-base) fine-tuned for the Name Entity Recognition task for the French language on five French NER datasets for 3 entities (LOC, PER, ORG). All these datasets were concatenated and cleaned into a single dataset that we called [frenchNER_3entities](https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities). This represents a total of over **420,264 rows, of which 346,071 are for training, 32,951 for validation and 41,242 for testing.** Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/NER_en/) or [French](https://blog.vaniila.ai/NER/). ## Dataset The dataset used is [frenchNER_3entities](https://huggingface.co/datasets/CATIE-AQ/frenchNER_3entities), which represents ~420k sentences labeled in 4 categories: | Label | Examples | |:------|:-----------------------------------------------------------| | PER | "La Bruyère", "Gaspard de Coligny", "Wittgenstein" | | ORG | "UTBM", "American Airlines", "id Software" | | LOC | "République du Cap-Vert", "Créteil", "Bordeaux" | The distribution of the entities is as follows: <table> <thead> <tr> <th><br>Splits</th> <th><br>O</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <td><br>train</td> <td><br>8,398,765</td> <td><br>327,393</td> <td><br>303,722</td> <td><br>151,490</td> </tr> <tr> <td><br>validation</td> <td><br>592,815</td> <td><br>34,127</td> <td><br>30,279</td> <td><br>18,743</td> </tr> <tr> <td><br>test</td> <td><br>773,871</td> <td><br>43,634</td> <td><br>39,195</td> <td><br>21,391</td> </tr> </tbody> </table> ## Evaluation results The evaluation was carried out using the [**evaluate**](https://pypi.org/project/evaluate/) python package. ### frenchNER_3entities For space reasons, we show only the F1 of the different models. You can see the full results below the table. 
<table> <thead> <tr> <th><br>Model</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <tr> <td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>0.941</td> <td><br>0.883</td> <td><br>0.658</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>0.942</td> <td><br>0.882</td> <td><br>0.647</td> </tr> <tr> <td rowspan="1"><br>NERmembert-base-3entities (this model)</td> <td><br>0.966</td> <td><br>0.940</td> <td><br>0.876</td> </tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br><b>0.969</b></td> <td><br><b>0.947</b></td> <td><br><b>0.890</b></td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>0.951</td> <td><br>0.894</td> <td><br>0.671</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>0.958</td> <td><br>0.901</td> <td><br>0.685</td> </tr> </tbody> </table> <details> <summary>Full results</summary> <table> <thead> <tr> <th><br>Model</th> <th><br>Metrics</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>O</th> <th><br>Overall</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>Precision</td> <td><br>0.918</td> <td><br>0.860</td> <td><br>0.831</td> <td><br>0.992</td> <td><br>0.974</td> </tr> <tr> <td><br>Recall</td> <td><br>0.964</td> <td><br>0.908</td> <td><br>0.544</td> <td><br>0.964</td> <td><br>0.948</td> </tr> <tr> <td>F1</td> <td><br>0.941</td> <td><br>0.883</td> <td><br>0.658</td> <td><br>0.978</td> <td><br>0.961</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>Precision</td> <td><br>0.929</td> <td><br>0.861</td> <td><br>0.813</td> <td><br>0.991</td> <td><br>0.974</td> </tr> <tr> <td><br>Recall</td> <td><br>0.956</td> <td><br>0.905</td> <td><br>0.956</td> <td><br>0.965</td> <td><br>0.948</td> </tr> <tr> <td>F1</td> <td><br>0.942</td> <td><br>0.882</td> <td><br>0.647</td> <td><br>0.978</td> <td><br>0.961</td> </tr> <tr> <td rowspan="3"><br>NERmembert-base-3entities (this model)</td> <td><br>Precision</td> <td><br>0.961</td> <td><br>0.935</td> <td><br>0.877</td> <td><br>0.995</td> <td><br>0.986</td> </tr> <tr> <td><br>Recall</td> <td><br>0.972</td> <td><br>0.946</td> <td><br>0.876</td> <td><br>0.994</td> <td><br>0.986</td> </tr> <tr> <td>F1</td> <td><br>0.966</td> <td><br>0.940</td> <td><br>0.876</td> <td><br>0.994</td> <td><br>0.986</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>Precision</td> <td><br>0.966</td> <td><br>0.944</td> <td><br>0.884</td> <td><br>0.996</td> <td><br>0.987</td> </tr> <tr> <td><br>Recall</td> <td><br>0.950</td> <td><br>0.972</td> <td><br>0.896</td> <td><br>0.994</td> <td><br>0.987</td> </tr> <tr> <td>F1</td> <td><br><b>0.969</b></td> <td><br><b>0.947</b></td> <td><br><b>0.890</b></td> <td><br><b>0.995</b></td> <td><br><b>0.987</b></td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>Precision</td> <td><br>0.946</td> <td><br>0.884</td> <td><br>0.859</td> 
<td><br>0.993</td> <td><br>0.971</td> </tr> <tr> <td><br>Recall</td> <td><br>0.955</td> <td><br>0.904</td> <td><br>0.550</td> <td><br>0.993</td> <td><br>0.971</td> </tr> <tr> <td>F1</td> <td><br>0.951</td> <td><br>0.894</td> <td><br>0.671</td> <td><br>0.988</td> <td><br>0.971</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>Precision</td> <td><br>0.955</td> <td><br>0.896</td> <td><br>0.866</td> <td><br>0.983</td> <td><br>0.974</td> </tr> <tr> <td><br>Recall</td> <td><br>0.960</td> <td><br>0.906</td> <td><br>0.567</td> <td><br>0.994</td> <td><br>0.974</td> </tr> <tr> <td>F1</td> <td><br>0.958</td> <td><br>0.901</td> <td><br>0.685</td> <td><br>0.988</td> <td><br>0.974</td> </tr> </tbody> </table> </details> In detail: ### multiconer For space reasons, we show only the F1 of the different models. You can see the full results below the table. <table> <thead> <tr> <th><br>Model</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <tr> <td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>0.940</td> <td><br>0.761</td> <td><br>0.723</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>0.921</td> <td><br>0.748</td> <td><br>0.694</td> </tr> <tr> <td rowspan="1"><br>NERmembert-base-3entities (this model)</td> <td><br>0.960</td> <td><br>0.887</td> <td><br>0.876</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br><b>0.965</b></td> <td><br><b>0.902</b></td> <td><br><b>0.896</b></td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>0.960</td> <td><br>0.890</td> <td><br>0.867</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>0.969</td> <td><br>0.919</td> <td><br>0.904</td> </tr> </tbody> </table> <details> <summary>Full results</summary> <table> <thead> <tr> <th><br>Model</th> <th><br>Metrics</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>O</th> <th><br>Overall</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>Precision</td> <td><br>0.908</td> <td><br>0.717</td> <td><br>0.753</td> <td><br>0.987</td> <td><br>0.947</td> </tr> <tr> <td><br>Recall</td> <td><br>0.975</td> <td><br>0.811</td> <td><br>0.696</td> <td><br>0.878</td> <td><br>0.880</td> </tr> <tr> <td>F1</td> <td><br>0.940</td> <td><br>0.761</td> <td><br>0.723</td> <td><br>0.929</td> <td><br>0.912</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>Precision</td> <td><br>0.885</td> <td><br>0.738</td> <td><br>0.737</td> <td><br>0.983</td> <td><br>0.943</td> </tr> <tr> <td><br>Recall</td> <td><br>0.960</td> <td><br>0.759</td> <td><br>0.655</td> <td><br>0.882</td> <td><br>0.877</td> </tr> <tr> <td>F1</td> <td><br>0.921</td> <td><br>0.748</td> <td><br>0.694</td> <td><br>0.930</td> <td><br>0.909</td> </tr> <tr> <td rowspan="3"><br>NERmembert-base-3entities (this model)</td> <td><br>Precision</td> <td><br>0.957</td> <td><br>0.894</td> <td><br>0.876</td> <td><br>0.986</td> <td><br>0.972</td> </tr> <tr> <td><br>Recall</td> 
<td><br>0.962</td> <td><br>0.880</td> <td><br>0.878</td> <td><br>0.985</td> <td><br>0.972</td> </tr> <tr> <td>F1</td> <td><br>0.960</td> <td><br>0.887</td> <td><br>0.876</td> <td><br>0.985</td> <td><br>0.972</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>Precision</td> <td><br>0.960</td> <td><br>0.903</td> <td><br>0.916</td> <td><br>0.987</td> <td><br>0.976</td> </tr> <tr> <td><br>Recall</td> <td><br>0.969</td> <td><br>0.900</td> <td><br>0.877</td> <td><br>0.987</td> <td><br>0.976</td> </tr> <tr> <td>F1</td> <td><br>0.965</td> <td><br>0.902</td> <td><br>0.896</td> <td><br>0.987</td> <td><br>0.976</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>Precision</td> <td><br>0.954</td> <td><br>0.893</td> <td><br>0.851</td> <td><br>0.988</td> <td><br>0.972</td> </tr> <tr> <td><br>Recall</td> <td><br>0.967</td> <td><br>0.887</td> <td><br>0.883</td> <td><br>0.984</td> <td><br>0.972</td> </tr> <tr> <td>F1</td> <td><br>0.960</td> <td><br>0.890</td> <td><br>0.867</td> <td><br>0.986</td> <td><br>0.972</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>Precision</td> <td><br>0.964</td> <td><br>0.922</td> <td><br>0.904</td> <td><br>0.990</td> <td><br>0.978</td> </tr> <tr> <td><br>Recall</td> <td><br>0.975</td> <td><br>0.917</td> <td><br>0.904</td> <td><br>0.988</td> <td><br>0.978</td> </tr> <tr> <td>F1</td> <td><br><b>0.969</b></td> <td><br><b>0.919</b></td> <td><br><b>0.904</b></td> <td><br><b>0.989</b></td> <td><br><b>0.978</b></td> </tr> </tbody> </table> </details> ### multinerd For space reasons, we show only the F1 of the different models. You can see the full results below the table. 
<table> <thead> <tr> <th><br>Model</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <tr> <td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>0.962</td> <td><br>0.934</td> <td><br>0.888</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>0.972</td> <td><br>0.938</td> <td><br>0.884</td> </tr> <tr> <td rowspan="1"><br>NERmembert-base-3entities (this model)</td> <td><br>0.985</td> <td><br>0.973</td> <td><br>0.938</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br><b>0.987</b></td> <td><br><b>0.979</b></td> <td><br><b>0.953</b></td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>0.985</td> <td><br>0.973</td> <td><br>0.938</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br><b>0.987</b></td> <td><br>0.976</td> <td><br>0.948</td> </tr> </tbody> </table> <details> <summary>Full results</summary> <table> <thead> <tr> <th><br>Model</th> <th><br>Metrics</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>O</th> <th><br>Overall</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>Precision</td> <td><br>0.931</td> <td><br>0.893</td> <td><br>0.827</td> <td><br>0.999</td> <td><br>0.988</td> </tr> <tr> <td><br>Recall</td> <td><br>0.994</td> <td><br>0.980</td> <td><br>0.959</td> <td><br>0.973</td> <td><br>0.974</td> </tr> <tr> <td>F1</td> <td><br>0.962</td> <td><br>0.934</td> <td><br>0.888</td> <td><br>0.986</td> <td><br>0.981</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>Precision</td> <td><br>0.954</td> <td><br>0.908</td> <td><br>0.817</td> <td><br>0.999</td> <td><br>0.990</td> </tr> <tr> <td><br>Recall</td> <td><br>0.991</td> <td><br>0.969</td> <td><br>0.963</td> <td><br>0.975</td> <td><br>0.975</td> </tr> <tr> <td>F1</td> <td><br>0.972</td> <td><br>0.938</td> <td><br>0.884</td> <td><br>0.987</td> <td><br>0.983</td> </tr> <tr> <td rowspan="3"><br>NERmembert-base-3entities (this model)</td> <td><br>Precision</td> <td><br>0.974</td> <td><br>0.965</td> <td><br>0.910</td> <td><br>0.999</td> <td><br>0.995</td> </tr> <tr> <td><br>Recall</td> <td><br>0.995</td> <td><br>0.981</td> <td><br>0.968</td> <td><br>0.996</td> <td><br>0.995</td> </tr> <tr> <td>F1</td> <td><br>0.985</td> <td><br>0.973</td> <td><br>0.938</td> <td><br>0.998</td> <td><br>0.995</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>Precision</td> <td><br>0.979</td> <td><br>0.970</td> <td><br>0.927</td> <td><br>0.999</td> <td><br>0.996</td> </tr> <tr> <td><br>Recall</td> <td><br>0.996</td> <td><br>0.987</td> <td><br>0.980</td> <td><br>0.997</td> <td><br>0.996</td> </tr> <tr> <td>F1</td> <td><br><b>0.987</b></td> <td><br><b>0.979</b></td> <td><br><b>0.953</b></td> <td><br><b>0.998</b></td> <td><br><b>0.996</b></td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>Precision</td> <td><br>0.976</td> <td><br>0.961</td> <td><br>0.910</td> 
<td><br>0.999</td> <td><br>0.995</td> </tr> <tr> <td><br>Recall</td> <td><br>0.994</td> <td><br>0.985</td> <td><br>0.967</td> <td><br>0.996</td> <td><br>0.995</td> </tr> <tr> <td>F1</td> <td><br>0.985</td> <td><br>0.973</td> <td><br>0.938</td> <td><br>0.998</td> <td><br>0.995</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>Precision</td> <td><br>0.979</td> <td><br>0.967</td> <td><br>0.922</td> <td><br>0.999</td> <td><br>0.996</td> </tr> <tr> <td><br>Recall</td> <td><br>0.996</td> <td><br>0.986</td> <td><br>0.974</td> <td><br>0.974</td> <td><br>0.996</td> </tr> <tr> <td>F1</td> <td><br><b>0.987</b></td> <td><br>0.976</td> <td><br>0.948</td> <td><br>0.998</td> <td><br>0.996</td> </tr> </tbody> </table> </details> ### wikiner For space reasons, we show only the F1 of the different models. You can see the full results below the table. <table> <thead> <tr> <th><br>Model</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <tr> <td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br><b>0.986</b></td> <td><br><b>0.966</b></td> <td><br><b>0.938</b></td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>0.983</td> <td><br>0.964</td> <td><br>0.925</td> </tr> <tr> <td rowspan="1"><br>NERmembert-base-3entities (this model)</td> <td><br>0.969</td> <td><br>0.945</td> <td><br>0.878</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>0.972</td> <td><br>0.950</td> <td><br>0.893</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>0.970</td> <td><br>0.945</td> <td><br>0.876</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>0.975</td> <td><br>0.953</td> <td><br>0.896</td> </tr> </tbody> </table> <details> <summary>Full results</summary> <table> <thead> <tr> <th><br>Model</th> <th><br>Metrics</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>O</th> <th><br>Overall</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>Precision</td> <td><br>0.986</td> <td><br>0.962</td> <td><br>0.925</td> <td><br>0.999</td> <td><br>0.994</td> </tr> <tr> <td><br>Recall</td> <td><br>0.987</td> <td><br>0.969</td> <td><br>0.951</td> <td><br>0.965</td> <td><br>0.967</td> </tr> <tr> <td>F1</td> <td><br><b>0.986</b></td> <td><br><b>0.966</b></td> <td><br><b>0.938</b></td> <td><br><b>0.982</b></td> <td><br><b>0.980</b></td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>Precision</td> <td><br>0.982</td> <td><br>0.951</td> <td><br>0.910</td> <td><br>0.998</td> <td><br>0.994</td> </tr> <tr> <td><br>Recall</td> <td><br>0.985</td> <td><br>0.963</td> <td><br>0.940</td> <td><br>0.966</td> <td><br>0.967</td> </tr> <tr> <td>F1</td> <td><br>0.983</td> <td><br>0.964</td> <td><br>0.925</td> <td><br>0.982</td> <td><br>0.80</td> </tr> <tr> <td rowspan="3"><br>NERmembert-base-3entities (this model)</td> <td><br>Precision</td> <td><br>0.971</td> <td><br>0.947</td> <td><br>0.866</td> <td><br>0.994</td> <td><br>0.989</td> </tr> <tr> 
<td><br>Recall</td> <td><br>0.969</td> <td><br>0.942</td> <td><br>0.891</td> <td><br>0.995</td> <td><br>0.989</td> </tr> <tr> <td>F1</td> <td><br>0.969</td> <td><br>0.945</td> <td><br>0.878</td> <td><br>0.995</td> <td><br>0.989</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>Precision</td> <td><br>0.973</td> <td><br>0.953</td> <td><br>0.873</td> <td><br>0.996</td> <td><br>0.990</td> </tr> <tr> <td><br>Recall</td> <td><br>0.990</td> <td><br>0.948</td> <td><br>0.913</td> <td><br>0.995</td> <td><br>0.990</td> </tr> <tr> <td>F1</td> <td><br>0.972</td> <td><br>0.950</td> <td><br>0.893</td> <td><br>0.996</td> <td><br>0.990</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>Precision</td> <td><br>0.970</td> <td><br>0.944</td> <td><br>0.872</td> <td><br>0.955</td> <td><br>0.988</td> </tr> <tr> <td><br>Recall</td> <td><br>0.989</td> <td><br>0.947</td> <td><br>0.880</td> <td><br>0.995</td> <td><br>0.988</td> </tr> <tr> <td>F1</td> <td><br>0.970</td> <td><br>0.945</td> <td><br>0.876</td> <td><br>0.995</td> <td><br>0.988</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>Precision</td> <td><br>0.975</td> <td><br>0.957</td> <td><br>0.872</td> <td><br>0.996</td> <td><br>0.991</td> </tr> <tr> <td><br>Recall</td> <td><br>0.975</td> <td><br>0.949</td> <td><br>0.922</td> <td><br>0.996</td> <td><br>0.991</td> </tr> <tr> <td>F1</td> <td><br>0.975</td> <td><br>0.953</td> <td><br>0.896</td> <td><br>0.996</td> <td><br>0.991</td> </tr> </tbody> </table> </details> ### wikiann For space reasons, we show only the F1 of the different models. You can see the full results below the table. 
<table> <thead> <tr> <th><br>Model</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> </tr> </thead> <tbody> <tr> <td rowspan="1"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>0.867</td> <td><br>0.722</td> <td><br>0.451</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>0.862</td> <td><br>0.722</td> <td><br>0.451</td> </tr> <tr> <td rowspan="1"><br>NERmembert-base-3entities (this model)</td> <td><br>0.947</td> <td><br>0.906</td> <td><br>0.886</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br><b>0.949</b></td> <td><br><b>0.912</b></td> <td><br><b>0.899</b></td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>0.888</td> <td><br>0.733</td> <td><br>0.496</td> </tr> <tr> <td rowspan="1"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>0.905</td> <td><br>0.741</td> <td><br>0.511</td> </tr> </tbody> </table> <details> <summary>Full results</summary> <table> <thead> <tr> <th><br>Model</th> <th><br>Metrics</th> <th><br>PER</th> <th><br>LOC</th> <th><br>ORG</th> <th><br>O</th> <th><br>Overall</th> </tr> </thead> <tbody> <tr> <td rowspan="3"><br><a href="https://hf.co/Jean-Baptiste/camembert-ner">Jean-Baptiste/camembert-ner</a></td> <td><br>Precision</td> <td><br>0.862</td> <td><br>0.700</td> <td><br>0.864</td> <td><br>0.867</td> <td><br>0.832</td> </tr> <tr> <td><br>Recall</td> <td><br>0.871</td> <td><br>0.746</td> <td><br>0.305</td> <td><br>0.950</td> <td><br>0.772</td> </tr> <tr> <td>F1</td> <td><br>0.867</td> <td><br>0.722</td> <td><br>0.451</td> <td><br>0.867</td> <td><br>0.801</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/cmarkea/distilcamembert-base-ner">cmarkea/distilcamembert-base-ner</a></td> <td><br>Precision</td> <td><br>0.862</td> <td><br>0.700</td> <td><br>0.864</td> <td><br>0.867</td> <td><br>0.832</td> </tr> <tr> <td><br>Recall</td> <td><br>0.871</td> <td><br>0.746</td> <td><br>0.305</td> <td><br>0.950</td> <td><br>0.772</td> </tr> <tr> <td>F1</td> <td><br>0.867</td> <td><br>0.722</td> <td><br>0.451</td> <td><br>0.907</td> <td><br>0.800</td> </tr> <tr> <td rowspan="3"><br>NERmembert-base-3entities (this model)</td> <td><br>Precision</td> <td><br>0.948</td> <td><br>0.900</td> <td><br>0.893</td> <td><br>0.979</td> <td><br>0.942</td> </tr> <tr> <td><br>Recall</td> <td><br>0.946</td> <td><br>0.911</td> <td><br>0.878</td> <td><br>0.982</td> <td><br>0.942</td> </tr> <tr> <td>F1</td> <td><br>0.947</td> <td><br>0.906</td> <td><br>0.886</td> <td><br>0.980</td> <td><br>0.942</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-3entities">NERmembert-large-3entities</a></td> <td><br>Precision</td> <td><br>0.958</td> <td><br>0.917</td> <td><br>0.897</td> <td><br>0.980</td> <td><br><b>0.948</b></td> </tr> <tr> <td><br>Recall</td> <td><br>0.940</td> <td><br>0.915</td> <td><br>0.901</td> <td><br>0.983</td> <td><br><b>0.948</b></td> </tr> <tr> <td>F1</td> <td><br><b>0.949</b></td> <td><br><b>0.912</b></td> <td><br><b>0.899</b></td> <td><br><b>0.983</b></td> <td><br><b>0.948</b></td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-base-4entities">NERmembert-base-4entities</a></td> <td><br>Precision</td> <td><br>0.895</td> <td><br>0.727</td> 
<td><br>0.903</td> <td><br>0.766</td> <td><br>0.794</td> </tr> <tr> <td><br>Recall</td> <td><br>0.881</td> <td><br>0.740</td> <td><br>0.342</td> <td><br>0.984</td> <td><br>0.794</td> </tr> <tr> <td>F1</td> <td><br>0.888</td> <td><br>0.733</td> <td><br>0.496</td> <td><br>0.861</td> <td><br>0.794</td> </tr> <tr> <td rowspan="3"><br><a href="https://hf.co/CATIE-AQ/NERmembert-large-4entities">NERmembert-large-4entities</a></td> <td><br>Precision</td> <td><br>0.922</td> <td><br>0.738</td> <td><br>0.923</td> <td><br>0.766</td> <td><br>0.802</td> </tr> <tr> <td><br>Recall</td> <td><br>0.888</td> <td><br>0.743</td> <td><br>0.353</td> <td><br>0.988</td> <td><br>0.802</td> </tr> <tr> <td>F1</td> <td><br>0.905</td> <td><br>0.741</td> <td><br>0.511</td> <td><br>0.863</td> <td><br>0.802</td> </tr> </tbody> </table> </details> ## Usage ### Code ```python from transformers import pipeline ner = pipeline('token-classification', model='CATIE-AQ/NERmembert-base-3entities', tokenizer='CATIE-AQ/NERmembert-base-3entities', aggregation_strategy="simple") result = ner( "Le dévoilement du logo officiel des JO s'est déroulé le 21 octobre 2019 au Grand Rex. Ce nouvel emblème et cette nouvelle typographie ont été conçus par le designer Sylvain Boyer avec les agences Royalties & Ecobranding. Rond, il rassemble trois symboles : une médaille d'or, la flamme olympique et Marianne, symbolisée par un visage de femme mais privée de son bonnet phrygien caractéristique. La typographie dessinée fait référence à l'Art déco, mouvement artistique des années 1920, décennie pendant laquelle ont eu lieu pour la dernière fois les Jeux olympiques à Paris en 1924. Pour la première fois, ce logo sera unique pour les Jeux olympiques et les Jeux paralympiques." ) print(result) ``` ```python [{'entity_group': 'LOC', 'score': 0.9463236, 'word': 'Grand Rex', 'start': 75, 'end': 84}, {'entity_group': 'PER', 'score': 0.9865267, 'word': 'Sylvain Boyer', 'start': 165, 'end': 178}, {'entity_group': 'ORG', 'score': 0.8532809, 'word': 'Royalties', 'start': 196, 'end': 205}, {'entity_group': 'ORG', 'score': 0.9034991, 'word': 'Ecobranding', 'start': 208, 'end': 219}, {'entity_group': 'PER', 'score': 0.56342626, 'word': 'Marianne', 'start': 299, 'end': 307}, {'entity_group': 'LOC', 'score': 0.5433658, 'word': 'Paris', 'start': 568, 'end': 573}] ``` ### Try it through Space A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/NERmembert). ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0294 | 1.0 | 43650 | 0.0903 | 0.9202 | 0.9427 | 0.9313 | 0.9835 | | 0.0202 | 2.0 | 87300 | 0.0852 | 0.9257 | 0.9514 | 0.9383 | 0.9854 | | 0.0122 | 3.0 | 130950 | 0.0876 | 0.9292 | 0.9534 | 0.9411 | 0.9858 | ### Framework versions - Transformers 4.36.0 - Pytorch 2.1.1 - Datasets 2.14.7 - Tokenizers 0.15.0 ## Environmental Impact *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - **Hardware Type:** A100 PCIe 40/80GB - **Hours used:** 1h45min - **Cloud Provider:** Private Infrastructure - **Carbon Efficiency (kg/kWh):** 0.079 (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) for the day of December 15, 2023.) - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: 0.035 kg eq. CO2 ## Citations ### NERembert-base-3entities ``` @misc {NERmembert2024, author = { {BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { NERmembert-base-3entities }, year = 2024, url = { https://huggingface.co/CATIE-AQ/NERmembert-base-3entities }, doi = { 10.57967/hf/1752 }, publisher = { Hugging Face } } ``` ### multiconer ``` @inproceedings{multiconer2-report, title={{SemEval-2023 Task 2: Fine-grained Multilingual Named Entity Recognition (MultiCoNER 2)}}, author={Fetahu, Besnik and Kar, Sudipta and Chen, Zhiyu and Rokhlenko, Oleg and Malmasi, Shervin}, booktitle={Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)}, year={2023}, publisher={Association for Computational Linguistics}} @article{multiconer2-data, title={{MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition}}, author={Fetahu, Besnik and Chen, Zhiyu and Kar, Sudipta and Rokhlenko, Oleg and Malmasi, Shervin}, year={2023}} ``` ### multinerd ``` @inproceedings{tedeschi-navigli-2022-multinerd, title = "{M}ulti{NERD}: A Multilingual, Multi-Genre and Fine-Grained Dataset for Named Entity Recognition (and Disambiguation)", author = "Tedeschi, Simone and Navigli, Roberto", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-naacl.60", doi = "10.18653/v1/2022.findings-naacl.60", pages = "801--812"} ``` ### pii-masking-200k ``` @misc {ai4privacy_2023, author = { {ai4Privacy} }, title = { pii-masking-200k (Revision 1d4c0a1) }, year = 2023, url = { https://huggingface.co/datasets/ai4privacy/pii-masking-200k }, doi = { 10.57967/hf/1532 }, publisher = { Hugging Face }} ``` ### wikiann ``` @inproceedings{rahimi-etal-2019-massively, title = "Massively Multilingual Transfer for {NER}", author = "Rahimi, Afshin and Li, Yuan and Cohn, Trevor", booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2019", address = "Florence, Italy", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/P19-1015", pages = "151--164"} ``` ### wikiner ``` @article{NOTHMAN2013151, title = {Learning multilingual named entity recognition from Wikipedia}, journal = {Artificial Intelligence}, volume = {194}, pages = {151-175}, year = {2013}, note = {Artificial Intelligence, Wikipedia and Semi-Structured Resources}, issn = {0004-3702}, doi = {https://doi.org/10.1016/j.artint.2012.03.006}, url = {https://www.sciencedirect.com/science/article/pii/S0004370212000276}, author = {Joel Nothman and Nicky Ringland and Will Radford and Tara Murphy and James R. 
Curran}} ``` ### frenchNER_3entities ``` @misc {frenchNER2024, author = { {BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { frenchNER_3entities }, year = 2024, url = { https://huggingface.co/CATIE-AQ/frenchNER_3entities }, doi = { 10.57967/hf/1751 }, publisher = { Hugging Face } } ``` ### CamemBERT ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020}} ``` ## License MIT
samuelashraff/whisper-tiny-en-atc-thesis-2-no-lora
samuelashraff
2024-11-26T13:21:25Z
117
0
transformers
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-26T08:28:21Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-tiny tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-tiny-en-atc-thesis-2-no-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-en-atc-thesis-2-no-lora This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7688 - Wer: 44.8980 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 1 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 15000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 0.4276 | 250.0 | 500 | 0.2941 | 22.4490 | | 0.0001 | 500.0 | 1000 | 0.4339 | 24.4898 | | 0.0 | 750.0 | 1500 | 0.5702 | 24.4898 | | 0.0 | 1000.0 | 2000 | 0.7041 | 28.5714 | | 0.0 | 1250.0 | 2500 | 0.7371 | 28.5714 | | 0.0 | 1500.0 | 3000 | 0.8153 | 32.6531 | | 0.0 | 1750.0 | 3500 | 0.8885 | 26.5306 | | 0.0 | 2000.0 | 4000 | 0.9523 | 24.4898 | | 0.0 | 2250.0 | 4500 | 0.9644 | 38.7755 | | 0.0 | 2500.0 | 5000 | 1.0169 | 32.6531 | | 0.0 | 2750.0 | 5500 | 1.0098 | 34.6939 | | 0.0 | 3000.0 | 6000 | 1.0696 | 32.6531 | | 0.0435 | 3250.0 | 6500 | 0.6549 | 26.5306 | | 0.0 | 3500.0 | 7000 | 0.8819 | 28.5714 | | 0.0 | 3750.0 | 7500 | 1.0423 | 30.6122 | | 0.0 | 4000.0 | 8000 | 1.2150 | 32.6531 | | 0.0 | 4250.0 | 8500 | 1.3003 | 32.6531 | | 0.0 | 4500.0 | 9000 | 1.4076 | 36.7347 | | 0.0 | 4750.0 | 9500 | 1.5208 | 38.7755 | | 0.0 | 5000.0 | 10000 | 1.6303 | 38.7755 | | 0.0 | 5250.0 | 10500 | 1.6312 | 38.7755 | | 0.0 | 5500.0 | 11000 | 1.6982 | 38.7755 | | 0.0 | 5750.0 | 11500 | 1.7714 | 42.8571 | | 0.0 | 6000.0 | 12000 | 1.8436 | 42.8571 | | 0.0 | 6250.0 | 12500 | 1.7950 | 44.8980 | | 0.0 | 6500.0 | 13000 | 1.8284 | 44.8980 | | 0.0 | 6750.0 | 13500 | 1.8639 | 44.8980 | | 0.0 | 7000.0 | 14000 | 1.8944 | 44.8980 | | 0.0 | 7250.0 | 14500 | 1.7909 | 44.8980 | | 0.0 | 7500.0 | 15000 | 1.7688 | 44.8980 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
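A minimal transcription sketch for this checkpoint (not part of the original card; the task is inferred from the `automatic-speech-recognition` tag and the audio file name is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned Whisper-tiny ATC checkpoint as an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="samuelashraff/whisper-tiny-en-atc-thesis-2-no-lora",
)

# "atc_clip.wav" is a hypothetical mono recording of ATC speech.
print(asr("atc_clip.wav")["text"])
```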
justinminlee/ppo-Huggy
justinminlee
2024-11-26T13:14:48Z
13
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-11-26T13:14:42Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: justinminlee/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
mindlywork/Mastering_Manicure
mindlywork
2024-11-26T13:02:49Z
370
4
diffusers
[ "diffusers", "text-to-image", "stable-diffusion", "lora", "template:sd-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:apache-2.0", "region:us" ]
text-to-image
2024-09-16T12:33:24Z
--- tags: - text-to-image - stable-diffusion - lora - diffusers - template:sd-lora widget: - text: '-' output: url: images/Screenshot 2024-07-19 104440.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: nail license: apache-2.0 --- # Mastering_Manicure <Gallery /> ## Model description Mastering_Manicure ## Trigger words You should use `nail` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/dasdsff/Mastering_Manicure/tree/main) them in the Files & versions tab.
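A possible way to load these LoRA weights with diffusers (a sketch, not part of the original card; it assumes access to the gated black-forest-labs/FLUX.1-dev base model and a GPU with enough memory, and that diffusers can resolve the adapter file from this repo automatically):

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model, then attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("mindlywork/Mastering_Manicure")
pipe.to("cuda")

# The prompt includes the documented trigger word "nail".
image = pipe(
    "close-up photo of a hand with glossy red nail art, studio lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("manicure.png")
```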
griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001
griffio
2024-11-26T12:59:37Z
197
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:google/vit-large-patch16-224", "base_model:finetune:google/vit-large-patch16-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-26T12:53:25Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-large-patch16-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001 results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9938775510204082 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001 This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0282 - Accuracy: 0.9939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-------:|:----:|:---------------:|:--------:| | 1.2849 | 4.4444 | 10 | 0.6545 | 0.8837 | | 0.2089 | 8.8889 | 20 | 0.1889 | 0.9694 | | 0.0278 | 13.3333 | 30 | 0.0619 | 0.9878 | | 0.0034 | 17.7778 | 40 | 0.0349 | 0.9918 | | 0.0012 | 22.2222 | 50 | 0.0282 | 0.9918 | | 0.0008 | 26.6667 | 60 | 0.0282 | 0.9939 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
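A minimal classification sketch (not part of the original card; the image path is a placeholder and the label names come from whatever `id2label` mapping was saved with the checkpoint):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="griffio/vit-large-patch16-224-dungeon-geo-morphs-0-4-26Nov24-001",
)

# "dungeon_tile.png" is a hypothetical image from the same domain as the training data.
for prediction in classifier("dungeon_tile.png"):
    print(prediction["label"], round(prediction["score"], 3))
```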
platzi/platzi-vit-model-Daniel-Sarmiento
platzi
2024-11-26T12:56:08Z
192
0
transformers
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-25T14:19:43Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-base-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy widget: - src: https://huggingface.co/platzi/platzi-vit-model-Daniel-Sarmiento/resolve/main/healthy.jpeg example_title: Healthy - src: https://huggingface.co/platzi/platzi-vit-model-Daniel-Sarmiento/resolve/main/bean_rust.jpeg example_title: Bean_rust model-index: - name: platzi-vit-model-Daniel-Sarmiento results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-Daniel-Sarmiento This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0243 - Accuracy: 0.9850 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 0.1296 | 3.8462 | 500 | 0.0243 | 0.9850 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
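A lower-level usage sketch based on the example image referenced in the card's widget (an illustration, not part of the original card; the predicted label names depend on the `id2label` mapping saved with the checkpoint):

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "platzi/platzi-vit-model-Daniel-Sarmiento"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# One of the widget images listed in the card.
url = f"https://huggingface.co/{model_id}/resolve/main/healthy.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```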
kikaigakushuu/Question_model_notitle
kikaigakushuu
2024-11-26T12:55:43Z
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:retrieva-jp/t5-base-medium", "base_model:finetune:retrieva-jp/t5-base-medium", "license:cc-by-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-26T07:22:36Z
--- library_name: transformers license: cc-by-sa-4.0 base_model: retrieva-jp/t5-base-medium tags: - generated_from_trainer metrics: - rouge model-index: - name: Question_model_notitle results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Question_model_notitle This model is a fine-tuned version of [retrieva-jp/t5-base-medium](https://huggingface.co/retrieva-jp/t5-base-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.4010 - Rouge1: 0.0884 - Rouge2: 0.024 - Rougel: 0.0883 - Rougelsum: 0.0887 - Gen Len: 16.0081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 1.7738 | 1.0 | 7858 | 1.4773 | 0.0796 | 0.0215 | 0.0794 | 0.0799 | 16.2695 | | 1.6313 | 2.0 | 15716 | 1.4103 | 0.0832 | 0.024 | 0.0828 | 0.0833 | 16.0633 | | 1.6204 | 3.0 | 23574 | 1.4010 | 0.0884 | 0.024 | 0.0883 | 0.0887 | 16.0081 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.1.0 - Tokenizers 0.19.1
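A minimal text2text sketch (not part of the original card; the expected input format is undocumented, so a raw Japanese passage is assumed here and the output is presumably a generated question):

```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="kikaigakushuu/Question_model_notitle",
)

# A short Japanese passage; the exact prompt format used during training is not documented.
passage = "吾輩は猫である。名前はまだ無い。"
print(generator(passage, max_new_tokens=32)[0]["generated_text"])
```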
LLAAMM/pixart-alpha-2x512x512-lora10kft
LLAAMM
2024-11-26T12:52:41Z
35
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "en", "dataset:LLAAMM/text2image10k", "base_model:PixArt-alpha/PixArt-XL-2-512x512", "base_model:finetune:PixArt-alpha/PixArt-XL-2-512x512", "license:openrail", "diffusers:PixArtAlphaPipeline", "region:us" ]
text-to-image
2024-11-26T11:50:56Z
--- license: openrail datasets: - LLAAMM/text2image10k language: - en base_model: - PixArt-alpha/PixArt-XL-2-512x512 pipeline_tag: text-to-image library_name: diffusers tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA text2image fine-tuning - doanletuanthanh/PixartAlpha512 These are LoRA adaptation weights for PixArt-alpha/PixArt-XL-2-512x512, fine-tuned on the LLAAMM/text2image10k dataset. You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
ahmedelsayed/probe-detr-5ep-simulation
ahmedelsayed
2024-11-26T12:50:10Z
192
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-11-26T12:49:44Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
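Since the card's getting-started section is still a placeholder, here is a hedged starting point inferred from the `object-detection` tag (the image path and score threshold are illustrative only):

```python
from transformers import pipeline

detector = pipeline(
    "object-detection",
    model="ahmedelsayed/probe-detr-5ep-simulation",
)

# "simulation_frame.png" is a hypothetical image from the simulation domain.
for det in detector("simulation_frame.png", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```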
beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML-F16-GGUF
beomi
2024-11-26T12:49:29Z
12
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "llama-cpp", "gguf-my-lora", "en", "base_model:beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML", "base_model:quantized:beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-26T12:49:26Z
--- base_model: beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML tags: - text-generation-inference - transformers - unsloth - llama - trl - llama-cpp - gguf-my-lora license: apache-2.0 language: - en --- # beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML-F16-GGUF This LoRA adapter was converted to GGUF format from [`beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML`](https://huggingface.co/beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML) via the ggml.ai's [GGUF-my-lora](https://huggingface.co/spaces/ggml-org/gguf-my-lora) space. Refer to the [original adapter repository](https://huggingface.co/beomi/KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML) for more details. ## Use with llama.cpp ```bash # with cli llama-cli -m base_model.gguf --lora KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML-f16.gguf (...other args) # with server llama-server -m base_model.gguf --lora KoAlpaca-RealQA-Solar-Ko-Recovery-11B-LoRA-ChatML-f16.gguf (...other args) ``` To know more about LoRA usage with llama.cpp server, refer to the [llama.cpp server documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).
briannlongzhao/sesame_street_textual_inversion
briannlongzhao
2024-11-26T12:45:08Z
12
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-15T09:15:42Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - briannlongzhao/sesame_street_textual_inversion These are textual inversion adaption weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
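A hedged loading sketch (not part of the original card): the learned token string is not documented, so `<sesame-street>` below is purely a placeholder name chosen when loading the embedding.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# "<sesame-street>" is a hypothetical placeholder token; pass whatever name you want to use in prompts.
pipe.load_textual_inversion(
    "briannlongzhao/sesame_street_textual_inversion", token="<sesame-street>"
)

image = pipe("a photo of <sesame-street> characters having a picnic").images[0]
image.save("sesame_street.png")
```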
t8star/FLUX.1-Fill-dev-GGUF-Aix
t8star
2024-11-26T12:43:51Z
340
10
null
[ "gguf", "region:us" ]
null
2024-11-24T03:39:01Z
GGUF quantizations of FLUX.1-Fill-dev. Perplexity deltas reported for each quantization:

| Quant | Δ ppl |
|:---|:---|
| BF16 | -0.0050 |
| Q8_0 | +0.0026 |
| Q6_K | +0.0217 |
| Q4_K_M | +0.1754 |

Author: T8star Aix
- Bilibili: https://space.bilibili.com/385085361
- YouTube: https://www.youtube.com/@T8star-Aix
- Free Knowledge Planet: https://t.zsxq.com/7F90A
- Official website: http://aix.studio/
- Telegram group: https://t.me/+mZ5Z-Kf_TH9lZjE1
- OpenArt workflows: https://openart.ai/workflows/profile/t8star
- LibLib AI workflows: https://www.liblib.art/userpage/f572a7d9aeaa48a7b406fc46a814d479/publish/workflow
- GitHub repository: https://github.com/T8star1984/Comfyui-Aix-NodeMap
- WeChat official account: Aix知识星球

Original repo: https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev
performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF
performanceoptician
2024-11-26T12:42:52Z
7
0
nemo
[ "nemo", "gguf", "llama-cpp", "gguf-my-repo", "en", "base_model:nvidia/Nemotron-Mini-4B-Instruct", "base_model:quantized:nvidia/Nemotron-Mini-4B-Instruct", "license:other", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-11-26T12:42:39Z
--- license: other license_name: nvidia-community-model-license license_link: https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct/blob/main/nvidia-community-model-license-aug2024.pdf language: - en base_model: nvidia/Nemotron-Mini-4B-Instruct library_name: nemo tags: - llama-cpp - gguf-my-repo --- # performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF This model was converted to GGUF format from [`nvidia/Nemotron-Mini-4B-Instruct`](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nvidia/Nemotron-Mini-4B-Instruct) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux) ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF --hf-file nemotron-mini-4b-instruct-iq4_xs-imat.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF --hf-file nemotron-mini-4b-instruct-iq4_xs-imat.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF --hf-file nemotron-mini-4b-instruct-iq4_xs-imat.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo performanceoptician/Nemotron-Mini-4B-Instruct-IQ4_XS-GGUF --hf-file nemotron-mini-4b-instruct-iq4_xs-imat.gguf -c 2048 ```
Ai4des/autotrain-kjxi3-hql8x
Ai4des
2024-11-26T12:37:53Z
114
0
transformers
[ "transformers", "tensorboard", "safetensors", "mpnet", "question-answering", "autotrain", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "endpoints_compatible", "region:us" ]
question-answering
2024-11-26T12:26:52Z
--- library_name: transformers tags: - autotrain - question-answering base_model: sentence-transformers/all-mpnet-base-v2 widget: - text: "Who loves AutoTrain?" context: "Everyone loves AutoTrain" --- # Model Trained Using AutoTrain - Problem type: Extractive Question Answering ## Validation Metrics loss: 5.707379341125488 exact_match: 0.0 f1: 0.0 runtime: 13.0624 samples_per_second: 0.766 steps_per_second: 0.077 : 3.0 ## Usage ```python import torch from transformers import AutoModelForQuestionAnswering, AutoTokenizer model = AutoModelForQuestionAnswering.from_pretrained(...) tokenizer = AutoTokenizer.from_pretrained(...) from transformers import BertTokenizer, BertForQuestionAnswering question, text = "Who loves AutoTrain?", "Everyone loves AutoTrain" inputs = tokenizer(question, text, return_tensors='pt') start_positions = torch.tensor([1]) end_positions = torch.tensor([3]) outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) loss = outputs.loss start_scores = outputs.start_logits end_scores = outputs.end_logits ```
mradermacher/Calmex26merge-12B-MoE-GGUF
mradermacher
2024-11-26T12:36:58Z
6
0
transformers
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/MultiMerge-7B-slerp", "allknowingroger/Calmex26-7B-slerp", "en", "base_model:allknowingroger/Calmex26merge-12B-MoE", "base_model:quantized:allknowingroger/Calmex26merge-12B-MoE", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-11-26T08:23:12Z
--- base_model: allknowingroger/Calmex26merge-12B-MoE language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - moe - frankenmoe - merge - mergekit - lazymergekit - allknowingroger/MultiMerge-7B-slerp - allknowingroger/Calmex26-7B-slerp --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> static quants of https://huggingface.co/allknowingroger/Calmex26merge-12B-MoE <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q4_0_4_4.gguf) | Q4_0_4_4 | 7.4 | fast on arm, low quality | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Calmex26merge-12B-MoE-GGUF/resolve/main/Calmex26merge-12B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
khilan-crest/twitter-roberta-base-sentiment-latest_26112024T175016
khilan-crest
2024-11-26T12:36:43Z
117
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:cardiffnlp/twitter-roberta-base-sentiment-latest", "base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T12:35:34Z
--- library_name: transformers base_model: cardiffnlp/twitter-roberta-base-sentiment-latest tags: - generated_from_trainer metrics: - f1 model-index: - name: twitter-roberta-base-sentiment-latest_26112024T175016 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-sentiment-latest_26112024T175016 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1756 - F1: 0.6208 - Learning Rate: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Use adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 200 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Rate | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:| | No log | 1.0 | 315 | 1.0763 | 0.5542 | 0.0000 | | 1.2037 | 2.0 | 630 | 0.9747 | 0.6378 | 0.0000 | | 1.2037 | 3.0 | 945 | 1.0738 | 0.6226 | 0.0000 | | 0.7714 | 4.0 | 1260 | 1.1502 | 0.6191 | 0.0000 | | 0.5374 | 5.0 | 1575 | 1.1756 | 0.6208 | 0.0 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
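A minimal classification sketch (not part of the original card; because the fine-tuning dataset is undocumented, the label names may differ from the base model's negative/neutral/positive scheme):

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="khilan-crest/twitter-roberta-base-sentiment-latest_26112024T175016",
)

# Example input; the returned label set depends on the (undocumented) fine-tuning data.
print(clf("The new update works great, but battery life got worse."))
```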
jpalgo/MARK-2000-QLORA-v4
jpalgo
2024-11-26T12:36:05Z
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T12:29:32Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
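The card's getting-started section is still empty; a generic causal-LM sketch is shown below (the chat template and prompt format are assumptions, since neither is documented for this checkpoint):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jpalgo/MARK-2000-QLORA-v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Assumes the tokenizer ships a usable chat template; otherwise fall back to plain prompts.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```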
crisp-im/mirage-phi3-instruct-rank
crisp-im
2024-11-26T12:32:39Z
134
0
transformers
[ "transformers", "safetensors", "phi3", "text-generation", "llama-factory", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T12:30:05Z
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
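The repository is tagged `custom_code`, so loading it presumably requires `trust_remote_code=True`; a minimal hedged sketch (the prompt is illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "crisp-im/mirage-phi3-instruct-rank"

# The custom_code tag suggests remote code must be trusted to load this checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "Rank the following support tickets by urgency:"  # hypothetical prompt, format undocumented
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```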
muhammadIsmail/llama_3.2_3b_RU-Classifier_26-11-2024
muhammadIsmail
2024-11-26T12:20:36Z
134
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T12:17:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GyuBack/multilingual-e5-large-instruct-FT_klue_mrc_train
GyuBack
2024-11-26T12:14:37Z
9
0
sentence-transformers
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:17552", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:intfloat/multilingual-e5-large-instruct", "base_model:finetune:intfloat/multilingual-e5-large-instruct", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-26T12:12:58Z
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:17552 - loss:MultipleNegativesRankingLoss base_model: intfloat/multilingual-e5-large-instruct widget: - source_sentence: 지금까지도 전통도자기를 제조하는 도예가들 다수가 살고 있는 곳은? sentences: - "1884년 나마쿠아(현재의 나미비아)를 식민지화 한 독일은 1904년 수탈에 참다 못해 봉기한 헤레로인을 독일 해군을 동원해 학살하였다.\ \ 그 이후로부터 현대사에서, 나미비아의 헤테로 족이 독일 정부로부터 공식적인 사과를 받는 것은 과거 제국주의 침략을 제대로 식민 전쟁과 대량\ \ 학살로써 인정받는 것인 만큼 커다란 염원이었다. 헤레로인은 베를린의 독일 기업들이 과거 나미비아에 수천 만 달러의 피해를 입힌 것에 대해\ \ 미국 국제 재판소에 소송을 했었다.\n\n독일 정부는 이 학살에 대해 1951년 '인종 학살 범죄에 관한 유엔 협약'이 발효되기 전의 사건이라며\ \ '인종 학살 (Genocide)'라는 표현을 쓰는 것을 꺼렸다. 이는 나치 독일의 유대인 인종 학살에 대한 태도와 상반되었고, 이는 독일이\ \ 프랑스, 이스라엘 등 강자에게만 사과한다는 비판으로 이어졌다. 2016년 6월, 독일 연방 의회가 오스만 튀르크의 아르메니아인 학살을 인종\ \ 학살로 인정하는 결의안을 택하자, 터키 공화국의 레제프 타이이프 에르도안 대통령은 \"독일은 나미비아 학살에 대해서나 얘기해 보라\"고\ \ 비판했다. 이후, 독일 외무부 피셔 장관은 나미비아 방문 중, 독일에 배상금을 물게 할 수도 있다는 우려에서 공식적인 사과를 거절하였다.\n\ \n2004년 독일 경제 장관 비초레크초일 하이데마리가 학살 100주년 추모식에 \"역사적이고 도덕적인 책임을 인정한다\"라고 발언하였으나\ \ 도덕적 차원의 사과였을 뿐, 정부 주도의 사과는 아니었다.\n\n또한 지금까지도 공식 사과는 인정하지 않은 채, 독일이 나미비아가 남아프리카공화국에서\ \ 독립한 1990년 이후 수억 유로에 달하는 원조를 했다며 공식 사과와 배상을 거부해왔다. 단지, 나미비아에 대해 1884년부터 1915년까지의\ \ 식민통치만을 공식 인정하고, 인종 학살에 대해서는 사과를 않고 있었다.\n\n하지만 2016년 7월 13일 (독일 현지시간), 독일 정부는\ \ 2016년에서 112년 전인 1904년 아프리카 남서부 나미비아에서 저지른 집단 살해 행위를 '인종학살 (Genocide)'로 인정하고\ \ 공식 사과하기로 결정했다. 독일 외교부는 나미비아 정부와 공동으로 2016년 말까지 공동 선언문을 완성해 발표할 것이라고 밝혔다.\n\n\ 독일 외교부는 이번 독일-나미비아 공동선언문의 사과는 법적 배상의 근거가 되지 않을 것이라 밝혔다. 이번에도 배상은 하지 않은 채로, 담수처리\ \ 시설 등 인프라 건설을 지원하는 방안을 고려 중이라 밝혔다. \n\n이번 독일 정부의 나미비아 헤레로인과 정부에 대한 공식적인 사과는 배상이\ \ 아닌 인프라 건설을 지원하는 방식으로 배상한다는 한계점이 있다. 하지만, 유럽 이외의 피해 국가나 제2차 세계 대전 이전 피해 국가에 대한\ \ 공식 사과를 함으로써, 지금까지 제국주의 국가들의 '그땐 다들 그랬지' 식으로 피해 국가에 대한 공식 사과를 하지 않은 채, 패권 국가들끼리\ \ 서로가 서로를 묵인해주는 체제를 깨뜨릴 수 있는 선례가 된 것이다. 또한, 이러한 독일의 선례는 일본이 제1차 세계 대전과 만주사변, 난징\ \ 대학살, 제2차 세계 대전당시 일본 제국의 대한민국, 중화인민공화국, 중화민국, 류큐국와 동남아시아 국가들에 대한 인종 학살과 침략 행위를\ \ 인정하지 않는 것과 대조된다." - '예부터 단양군 대강면 방곡리 일대는 조선시대의 민수용 도자기의 집산지로 알려진 곳으로 현재도 전통도자기를 만들고 있는 도예가들이 많이 생활하고 있다 서동규(徐東圭)는 1956년부터 선친(先親) 서병욱(徐炳旭)의 가업인 방곡도예에 입문하여 전통도자기에 관심을 갖고 전통도자 수업을 시작하였으며, 일생동안 단양 방곡에서만 전통도예방식으로 활발한 작업활동을 하고 있다. 전통도자기 전승계보가 뚜렷하고, 전승자의 전승의지가 확고하며 독자적인 도예기술을 확보하여 독창적인 도예기술이 성숙단계이 이르고 있다 초기에는 "다완(茶碗)"을 중심으로 찻그릇 제작에 힘을 쏟아 일본 애용가들의 호응을 받았고, 특히 짙은 갈색이 발색되는 종래의 도자기와 달리 방곡의 특유한 토질에 느릅나무 외 수종(數種)의 나무를 태운 재를 원료로 독특한 황녹색으로 발색시킨 녹자의 재현으로 1999년 특허청 특허등록되었으며, 2000년 노동부 명장 제28호로 선정되었다 서동규(徐東圭)는 단양 방곡에서 출생하여 현재까지 이곳에서 3대째(徐炳旭→徐東圭→徐贊紀) 도예의 맥을 이어오고 있으며, 녹자, 생활자기, 이조다왕 등을 전통방식에 의한 장작가마만을 고수하며, 전통도자기 복원을 위하여 활발하게 작업활동을 하고 있다 느릅나무 재를 유약원료로 이용하여 독특한 기법으로 황녹색을 발색시킨 독창적인 도예기술을 개발하여 이를 특허 등록함으로써 독자적인 도예기술을 확보하였고 꾸준한 도예기술개발과 전통도예 복원을 위하여 노력하고 있다 단양군 대강면 "방곡리(傍谷里)" 마을은 먹을 것이 풍부하여 뒷방에 음식물을 가득쌓아 두었다는 데서 유래된 이름이다. 이곳에서 17세기경부터 백자와 분청사기를 생산하여 조선시대 민수용 도자기를 만들어온 마을로 지금도 농경지에서 백자편들을 많이 발견할 수 있다 저잣거리, 빗재 등 지역의 유래에서 도자기 제작 및 판매시장이 형성되었음을 알 수 있으며, 지금도 옛 가마터가 있다' - 6·4 지방선거를 앞두고 재원 대책 없는 선심성 공약이 쏟아지고 있다. ‘무상버스’ ‘100원 택시’ 같은 무상 시리즈 공약부터 광역급행철도(GTX)처럼 수조원이 들어가는 공약도 등장했다. 하나같이 이렇다 할 재원 대책은 없다. 국회는 이런 것을 막기 위해 2012년 의원입법으로 ‘페이고’ 법안을 발의했다. 하지만 1년6개월이 지나도록 논의조차 이뤄지지 않고 있다. 페이고(pay-go)는 ‘pay as you go(번 만큼 쓴다)’의 줄인 말로, 중앙이나 지방 정부가 새로운 재정 지출 사업을 추진할 때 이에 상응하는 세입 증가나 지출 축소 등 재원 조달 방안을 동시에 마련하도록 의무화하는 것이다. 국회가 페이고 도입에 미온적인 데는 이유가 있다. 현재 계류돼 있는 관련 법안은 페이고 원칙을 정부는 물론 의원입법에도 적용하자는 것이다. 의원들이 지역구의 표를 얻기 위해 ‘포퓰리즘(대중 인기 영합주의)’ 법안을 양산해 국가의 재정건전성을 해치는 것을 막자는 취지다. 하지만 야당은 물론 여당 일각에서도 페이고가 도입되면 자신들의 권한이 축소될 것이란 이유로 반대하는 기류가 강하다. 페이고 원칙 없이 예산이 낭비되면 피해는 국민에게 돌아온다. 
서울시는 2010년 지방선거 당시 공약으로 내건 ‘3무정책’(무상보육·무상급식·무상의료)에 따라 보편적인 복지를 시행하다 보니 예산이 부족해 올봄부터 저소득층이 이용하는 초등학생 돌봄교실 혜택을 줄였다. 표를 얻기 위해 재원 대책 없이 내놓은 공약 때문에 정작 필요한 곳에 지원해야 할 예산이 ‘펑크’난 것이다. 민경국 강원대 경제학과 교수는 “지자체의 복지 사업도 중앙정부의 매칭 지원이 없으면 시행할 수 없는 만큼 국회에서 재정준칙에 따라 정부 사업을 감시했다면 무상복지 같은 무리한 공약은 애초 태어날 수 없었을 것”이라고 말했다. 안충영 전 규제개혁위원장(중앙대 석좌교수)은 “정치적 목적의 입법을 막도록 국회 스스로 규제하는 장치가 시급하다”고 강조했다. - source_sentence: 체크포인트에 와서 총을 쏜 것은 누구인가? sentences: - "반정부 시위대가 점령한 체크포인트에서 총격전이 발생하면서 부활절 휴전이 깨졌다. 알 수 없는 무장 괴한들이 차 4대를 가지고 체크포인트에\ \ 도착한 이후, 헤드라이트를 켜야 해야 한다고 말하고 ID카드를 꺼내고 트렁크 검사를 해야 한다고 트렁크를 연 직후 발포했다. 휴전 때문에,\ \ 체크포인트에 있던 지역 시위대들은 무장 세력이 박쥐로 분장했다고 말했다. 슬로비얀스크 자위대 20명이 공격자를 격퇴하기 위해 지원을 왔다.\ \ 그들이 도착한 이후, 괴한 2명을 살해하고 나머지 괴한 2명은 하르키우 방향으로 차를 타고 현장을 떠났다고 말했다 또한, 이 공격 현장에서\ \ 우익 섹터의 심벌로 보이는 물건들이 발견되었다고 말했다. 스카이 뉴스의 기자 케이티 스텔라드는 분리주의자들의 말들이 서로 불일치하고 그들의\ \ 말이 일관된다는 증거가 거의 없다고 보고했다. BBC 뉴스의 다니엘 스탠포드는 이들이 제시한 증거들이 '반신반의'한 것이라고 말했다. \n\ \n4월 20일 저녁, 러시아의 라이프 뉴스에서는 4월 19일 유튜브에 업로드 된 영상 및 사건의 보고서를 보여주었다. 이 사건의 비디오 보고서는\ \ 현지 시간 4월 20일 오전 2시에 촬영되었음에도 불구하고 일광이 보이는 영상이 포함되어 있다. 러시아 TV가 발표한 비디오는 실제 우익\ \ 섹터의 공격이 있기 10시간 전에 촬영한 것으로, 러시아 카메라맨이 실수로 지우는 것을 잊었다는 주장으로 타임 스탬프를 보여준 것으로 증명되었다.\ \ \n\n우크라이나 내무부는 적어도 분리주의자 3명이 사망했고 3명이 부상했으며 이 중에는 러시아 요원들도 포함되어 있는 것으로 의심된다고\ \ 말했다." - '키요미는 히즈루국의 대사관이자 방계 출신으로서 직계가 끊겨져 대마저 끊겨질 위기에 놓인 아즈마비토 가문의 수장으로서 살아가면서 무슨 연유에서인지 아즈마비토 쇼군의 후계를 이을 정통 후계자를 물색하는 데에 혈안이 되어 있었다. 그러던 어느 날, 마레군 전사대의 전사장인 지크 예거가 키요미에게 접근하여 최근에 알려진 파라디 섬에 남은 쇼군의 후예를 담보로 둘만의 비밀 거래를 시도한다. 그렇찮아도 행방불명된 쇼군의 직계 후손을 찾아내서 후계자로 간절히 삼고 싶었던 키요미는 거래를 받아들이고 지크와 비밀리에 히즈루 본국으로 추정되는 어떤 장소 에서 비밀리에 접촉한다. 키요미는 복권파를 운영하던 부모를 일곱 살 나이에 폭로한 지크의 명성을 익히 알고 있었다. 키요미는 지크가 알려 준 중대한 진실 을 전해듣고, 처음에는 골수 엘디아 복권파라고 주장하던 그가 부모와 복권파를 배신하고 밀고했다는 것에 의아해 했지만 얼마 안 가 치안 보안 당국에 의해 복권파가 거의 발각되기 직전까지 가자 자신만이라도 그 유지를 잇기 위해 선택의 여지 없이 밀고를 감행했다는 걸 알게 된다. 철저히 이익 지향적인 키요미는 지크가 왕가의 후손이라는 사실이 가문에게 도움이 되지 않는다면 주저하지 않고 마레군에 그 정보를 넘기겠다고 협박하는 자세를 취한다. 이럼에도 지크는 하나도 동요하지 않고 오히려 여유로운 태세를 갖추며 키요미에게 환심을 살 어떤 물건을 보여 주는데, 다름 아닌 지크가 주워 온 미케 자카리아스의 입체기동장치 가스 봄베였다. 입체기동장치에는 핵심적인 동력 자원이자 파라디 섬에 유일하게 매장된 희귀자원인 빙폭석이 들어 있었고 빙폭석에 대한 정보를 입수한 상태였던 키요미는 손을 입에 댈 정도로 놀라워한다. 빙폭석을 보고서 눈을 빛낸 키요미는 지크로부터 파라디 섬에 이 빙폭석만 아니라 먼 옛날에 한 거인의 왕이 깊숙이 숨겨둔 비밀 지하동굴에 더 많은 귀한 광물과 보석들을 숨겼다는 이야기를 듣게 된다. 파라디와 다시 수교하면 빙폭석을 생산하는 건 물론이고 가세가 기울어져 가는 아즈마비토 가문과 재벌을 부흥시킬 거라는 말에 파라디와의 수교를 결심하게 된다. 그리고 거래 제안을 완전히 수락하고 그가 서류 형태로 제공한 비책들을 전해 받으며, 마침내 851년, 히즈루국의 증기선을 이끌고 부하들과 보좌관들과 함께 파라디 섬 항구에 도착한다. 제145대 프리츠 왕의 무저항주의 정책으로 강제로 단교되었던 파라디 섬과 히즈루국의 외교 관계가 103년 만에 다시 회복되는 기념비적인 날이었다.' - 스페이스리버는 쇼핑몰 통합관리 솔루션 개발사인 이비즈웨이와 손잡고 사용자들의 상품관리부터 출고까지의 물류 관리 서비스를 강화하는 업무협약(MOU)를 체결했다고 6일 밝혔다. ‘노스노스’는 스페이스리버가 개발한 WMS (물류관리 시스템)이다. 온라인상에서 재고 및 유통기간, 입, 출고를 관리하고 이비즈웨이의 쇼핑몰 관리 솔루션인 ‘비젬’과 연동하여 온오프라인의 판매채널과 물류창고 내 상품 상황을 통합관리 할 수 있다. 이번 제휴를 통해서 노스노스와 비젬이 지원되는 솔루션을 이용중인 셀러라면 노스노스에서 비젬의 주문수집과 운송장 송신을 제한없이 이용할 수 있게된다. 오픈마켓, 종합몰, 전문몰 등 70개 이상의 다양한 온라인 채널의 재고, 상품정보가 실시간 연동돼 시간과 비용을 절약할수 있게 되었다. 이번 제휴로 기존의 노스노스가 제공하는 WMS 기능의 편리성을 향상시켜 다수의 판매채널에서 수집된 정보를 비젬에서 취합하고 이를 클릭 한번으로 노스노스와 연동 및 출고지시까지 자동으로 진행된다. 기존에 엑셀로 수기 작성하여 업, 다운로드하던 주문 관리의 어려움을 간소화시킬 수 있게 되었다. - source_sentence: 아트 서울-김과장 전시장 가는 날'에 전시된 대략적인 작품 수는? sentences: - 과장 명함을 가진 직장인은 물론 동반 가족까지 무료로 감상할 수 있는 그림장터 ‘아트 서울-김과장 전시장 가는 날’이 6~21일 서울 예술의전당 한가람미술관 2, 3층에서 펼쳐진다. 아트컴퍼니 마니프(대표 김영석)가 마련한 이 행사에는 연령이나 성별, 구상과 비구상, 회화와 입체 등 특별한 제한 없이 모든 장르의 유망 작가 136명이 부스별 개인전 형식으로 회화·조각·설치 작품 2500여점을 전시, 판매한다. 한국국제아트페어(KIAF)나 화랑미술제의 경우 화랑들이 각 부스에서 소속 작가들의 작품을 내거는 데 비해 마니프의 ‘아트서울’ 아트페어에선 작가들이 부스를 열고 전시장에 매일 나와 관람객을 맞이하며 작품을 판매한다. 마음에 드는 작품이 있으면 작가에게 설명을 들을 수 있고 대화도 나눌 수 있다. 
주최 측은 최근 경기 침체와 샐러리맨들의 주머니 사정을 감안해 출품작의 90%인 2000여점의 작품 가격을 점당 10만~1000만원으로 책정했다. 나머지 작품도 대부분 4000만원 이하에 나오며 모든 출품작은 정찰제로 판매한다. 김 대표는 “미술시장 활성화와 전시문화의 새로운 대안 제시를 위해 유망한 신진·중견작가를 많이 초대했다”고 말했다. 관람료 어른 6000원, 학생 5000원. (02)514-9292 - 현정은 현대그룹 회장(사진)이 1년 만에 금강산을 찾을 것으로 알려졌다. 내달 4일 열리는 정몽헌 회장의 11주기 추모식에 참석하기 위해서다. 얼어붙은 남북관계를 개선하는 계기가 될 수 있을지 관심이 쏠리고 있다.현대아산 관계자는 28일 “현 회장과 조건식 현대아산 사장 등 현대그룹 관계자 20여명의 방북 신청서를 조만간 통일부에 접수할 계획”이라고 말했다. 현 회장이 방북하는 것은 지난해 정 회장의 10주기를 맞아 금강산을 찾은 지 1년 만이다.금강산에서는 매년 8월4일 정 회장 추모식이 열린다. 현 회장은 2009년 11월 금강산 관광 11주년 기념행사에 참가한 뒤 금강산을 찾지 않다가 지난해 10주기를 계기로 금강산에서 열리는 추모식에 참석했다. 특히 지난해에는 김정은 북한 국방위원회 제1위원장의 편지를 대리인이 낭독하는 ‘구두친서’를 받기도 했다.지난해에는 10주기라는 상징성 때문에 현대그룹 계열사 최고경영자(CEO)들이 모두 참석해 금강산 추모식 참석 인원이 38명에 달했지만 올해는 방북단 규모가 그 절반 정도로 줄어들 것으로 예상된다.관련 업계는 현 회장의 방북이 경색된 남북관계 개선에 조금이라도 도움이 되기를 기대하고 있다. 현대아산 관계자는 “이번 방북은 추모식에 참석하고 사업현장 시설을 점검하려는 목적”이라며 “아직 일정이 최종 확정된 것은 아니다”고 말했다. 이상은 기자 - '미르너는 아들의 양육을 보그말과 리어흐 루어크라라는 여전사에게 맡겼다. 두 여인은 아이를 슬리어우 블라드머 숲에 숨겨 기르면서 싸움과 사냥을 가르쳤다. 어느 정도 나이를 먹자 신분을 숨긴 채 군인으로 복무했는데, 어디를 가든 더이니가 쿠월의 아들이라는 것이 밝혀지면 그를 지켜줄 수 없다며 왕들이 그를 내쳐서 여러 소왕국을 전전했다. 더이니는 보인 강 근처에서 레프리컨 같은 드루이드이자 시인인 핀 에케스를서 만났고 그 밑에서 배웠다. 핀 에케스는 지식의 연어를 잡으려고 7년째 시도하고 있었다. 지식의 연어는 보인 강에 사는 물고기인데, 강에 떨어지는 성스러운 개암나무 열매를 받아먹었다. 때문에 이 연어를 잡아먹으면 세상의 모든 지식을 얻게 될 것이라고 했다. 마침내 연어를 잡은 핀 에케스는 더이니에게 연어를 요리해 오라고 시켰다. 요리를 하던 도중 엄지손가락에 연어 기름이 튀자 더이니는 손가락을 입에 넣고 빨았다. 이로 인해 연어의 지식이 더이니에게 흘러들어갔다. 더이니가 연어의 지혜를 얻은 것을 본 핀 에케스는 어린 더이니에게 연어를 다 먹으라고 주었다. 이 때 얻은 지식으로 핀 막 쿠월은 어떻게 해야 생부의 원수 골에게 복수할 수 있을지를 알아냈다. 그 뒤로도 핀은 연어의 지혜를 떠올려야 할 때면 처음 연어의 맛을 보았을 때처럼 엄지손가락을 입술 위에 올리게 되었다. 핀과 연어 이야기는 웨일스의 그위온 바흐의 이야기와 유사하다.' - source_sentence: 세션 3의 토론에 참여한 사람은? sentences: - ㈜이건창호(대표 김재엽)가 품격이 다른 알루미늄 시스템 현관도어 ‘ADS 70 AP(Aluminum Door System 70 AP)’를 출시했다고 17일 밝혔다. 이건창호의 30여 년의 노하우를 담은 제품으로 단독주택 및 갤러리, 상업시설 등 다양한 건물에 사용할 수 있으며 단순한 출입문을 넘어 공간의 첫 인상을 한층 더 세련되게 만들어주는 프리미엄 도어이다. ADS 70 AP는 견고한 알루미늄 소재의 프레임에 알루미늄 판넬과 디자인 3중유리를 탑재해 도어의 기본 기능인 보안성은 물론 감각적인 디자인과 세련된 스타일을 겸비했다. 또한 알루미늄 시스템 하드웨어를 적용하여 우수한 단열성과 기밀성능을 모두 갖추었다. 신제품의 유리는 실외와 실내면이 다른 특수 유리가 사용됐다. 실외 면은 투과율 0%의 골드사틴 유리로 시원한 느낌을 주면서도 뛰어난 보안성을 갖췄으며, 실내면은 반사 유리가 사용돼 현관 앞 공간을 확장되어 보이게 하고 외출 시 전신거울로 활용할 수도 있다. 이와 함께 도어 클로저(열린 문을 자동으로 닫아주는 장치)는 매립형 방식을 사용해 돌출형 도어 클로저에 비해 디자인이 깔끔하며, 다양한 각도에서 정지가 가능하고 안전하게 닫힌다. 또한, 실내 핸들은 독일 슈코사의 프리미엄 핸들을 채택해 자동 잠금 기능이 있고, 여성, 노약자, 아이들의 작은 힘에도 쉽게 열고 닫을 수 있다. 그리고 지문인식, 번호, RFID 카드 사용이 가능한 디지털 도어락 적용으로 보안성도 한층 높였다. 도어 디자인은 소재의 다양성에 모던 감각을 입힌 모던유로스타일 트렌드를 반영했다. 취향에 따라 ▲메탈릭 챠콜 ▲메탈릭 골드 실버 ▲메트로 브론즈 ▲리갈 블루 등 도시적인 4가지 색상의 판넬과 ▲골드 사틴 ▲브론즈 반사 ▲브론즈 미스트 등 3가지 종류의 유리를 조합할 수 있다. 또, 개폐 방식 역시 외닫이와 양여닫이 스타일 중 선택 가능하다. 이건창호 관계자는 “최근 건물의 외관 디자인이 강조되며 도어도 가치 있는 건물을 완성하기 위한 위한 중요한 인테리어 아이템으로 자리잡고 있는 추세”라며 “이건창호 ADS 70 AP는 앞으로 시스템 도어 트렌드를 이끌어갈 대표적 아이템이 될 것”이라고 밝혔다. - "이팝나무란 이름은 꽃이 필 때 나무 전체가 하얀꽃으로 뒤덥여 이밥, 즉 쌀밥과 같다고 하여 붙여진 것이라고 하며, 여름이 시작될 때인 입하에\ \ 꽃이 피기 때문에 ‘입하목(立夏木)’이라 부르기 시작하여 입하목에서 입하나무를 거쳐 오늘의 이팝나무가 되었다고 한다. \n\n장승포 덕포리\ \ 이팝나무의 나이는 300년 정도로 추정되며, 높이는 15m, 둘레는 3m이다. 마을 안에서 자라고 있으며, 나무 곁에는 작은 돌무더기로\ \ 된 탑이 있다. 이 작은 탑들은 이곳 사람들이 마을의 평화와 모든 일이 잘 되길 기원하며 쌓았다고 하며, 예전에는 왜적이 침입할 때 방어용\ \ 무기로 사용했다고 한다.\n\n한국의 크고 오래된 이팝나무에는 거의 한결같은 이야기가 전해지고 있는데, 그것은 이팝나무의 꽃이 많이 피고\ \ 적게 피는 것으로써 그해 농사의 풍년과 흉년을 점칠 수 있다는 것이다. 이팝나무는 물이 많은 곳에서 잘 자라는 식물이므로 비의 양이 적당하면\ \ 꽃이 활짝 피고, 부족하면 잘 피지 못한다. 물의 양은 벼농사에도 관련되는 것으로, 오랜 경험을 통한 자연관찰의 결과로서 이와 같은 전설이\ \ 생겼다고 본다. \n\n장승포 덕포리 이팝나무는 크고 오래된 나무로 생물학적 가치가 높아 기념물에 지정되어 보호하고 있다." 
- 2013년 ‘세계 경제·금융 컨퍼런스’ 둘째날인 다음달 3일 열리는 세션 1과 세션 3도 알찬 주제발표에 이은 심도 있는 토론으로 청중의 ‘지식 갈증’을 시원하게 풀어줄 전망이다.‘저성장 시대의 세계 경제, 공정한 경쟁과 상생의 협력을 통한 회복과 새로운 도약’을 주제로 한 세션 1은 케이 베일리 허치슨 전 미국 상원의원, 리다오쿠이 중국 칭화대 세계경제연구센터 소장, 하마다 고이치 미국 예일대 명예교수가 20분씩 연설한 뒤 함께 열띤 토론을 벌인다.허치슨 전 의원은 20년간 미국 상원의원으로 일하면서 통상·과학·교통위원회 등에서 활동했다. 현재는 국제전략연구소 자문위원을 맡고 있다. 그만큼 미국 정부의 정책 이면을 엿볼 수 있는 분석과 전망을 내놓을 것으로 기대된다. 여기에다 박근혜 정부의 경제정책 밑그림을 그린 김광두 국가미래연구원 원장이 세션을 이끌어 흥미를 더한다.중국 인민은행 통화정책위원을 지낸 리 교수는 “중국의 생산가능인구(15~64세)가 줄어들기 시작하면서 임금 상승에 따른 인플레이션이 본격화하고 있다”고 진단했다. “중국 정부가 올해 물가상승률을 목표치인 3.5% 이내로 억제하기 쉽지 않을 것”이라는 시각도 갖고 있다. 중국의 인플레이션은 경기 회복이나 정부 통화정책 등의 일시적인 요인보다 경제구조 변화에 따른 결과라는 얘기다.리 교수는 각국이 벌이고 있는 ‘환율전쟁’에 대해서도 “선진국들이 경쟁적인 통화가치 평가절하에 나서고 있는 상황은 우려스럽다”고 얘기해왔다. 하마다 명예교수의 연설과 토론은 아베 신조 일본 총리의 경제정책을 직·간접적으로 전해 들을 수 있는 기회를 제공한다. 그는 아베 총리의 경제 브레인답게 최근 “지금은 일본 경제가 가속페달을 밟아야 할 때”라며 “디플레이션(지속적인 물가 하락) 탈출에 수반되는 부작용에 대해선 어느 정도 눈감아줄 필요가 있다”고 주장해 눈길을 끌었다.하마다 교수는 “통화정책은 효과가 즉시 나타난다는 점을 감안해 집중적으로 정책을 집행하는 게 바람직하다”고 말하기도 했다. 세션 3에서는 ‘세계 3대 산업디자이너’로 꼽히는 이스라엘의 아릭 레비가 ‘즐거움으로 경제를 디자인하다’는 주제로 특별강연을 한다. 레비는 이스라엘 출신으로 프랑스 파리에서 ‘L’ 디자인 스튜디오를 운영하고 있다. 명품 브랜드 까르띠에의 프랑스 파리 본사 건물 인테리어를 비롯해 가구업체인 비트라와 자노타, 르노자동차, 아디다스 등의 제품 디자인을 맡기도 했다. 레비는 한국경제신문과의 서면 인터뷰에서 “이탈리아풍으로 디자인한 살수 펌프의 경우 같은 디자인의 제품이 40억개 넘게 생산됐다”며 “잘 디자인한 제품은 그 자체로 경제 전체에 영향을 미친다”고 말했다. 창의적 디자인의 비결을 묻는 질문에 “디자인적 상상력은 일상생활에서 직접 다양한 활동을 하며 얻게 되는 동물적인 감각에서 나온다”며 “일상생활에서 실제 효용이 높은 것이 우수한 디자인”이라고 설명했다.그는 박근혜 정부의 ‘창조 경제’에 대해 “새로운 아이디어를 강조한다는 점에서 중요한 진전”이라고 평가하면서 “창의적이고 혁신적인 아이디어를 내놓기 위해서는 민간 부문의 자율성을 극대화해야 한다”고 당부했다. - source_sentence: 국방부 장관이 제시한 전작권 반환시기는 몇 년도인가? sentences: - 한국이 미국에 2015년으로 예정된 전시작전통제권 전환 시기를 다시 연기하자고 제안한 것으로 알려졌다.미국 국방부 고위 당국자는 17일 김관진 국방부 장관이 척 헤이글 국방장관에게 최근 전작권 전환의 재연기를 제안해 양국 정부가 이 문제를 협의하고 있다고 연합뉴스에 밝혔다. 이에 대해 한국 국방부 관계자는 “2013년 전반기에 심각해진 북한 핵 문제 등 안보 상황을 중요한 조건으로 고려하면서 전작권 전환 준비를 점검해 나가자고 미국 측에 제의해 한·미 간 논의 중에 있다”고 말했다. 그는 이어 “전작권 전환은 향후 한·미 안보협의회(SCM), 군사위원회의(MCM) 등을 통해 지속적으로 협의해 나갈 것”이라고 말했다.전작권 재연기론의 배경에는 북한이 지난해 말부터 핵실험을 강행하는 등 대남 전쟁위협 수위를 급격하게 높인 것을 꼽을 수 있다. 북한은 지난 2월 3차 핵실험 이후 정전협정을 백지화하겠다고 위협한 데 이어 ‘1호 전투근무태세’ 명령을 내리는 등 위협 강도를 끌어올렸다.정부가 전작권 전환 재연기를 제의한 시기가 지난 3월이며 김 장관이 샹그릴라 대화에서 헤이글 국방장관에게 전환 시기의 연기를 제의했다는 관측이 제기되기도 했다. 정부의 한 고위 관계자는 “올해 초 북한의 위협이 계속되는 등 남북관계 상황을 고려하지 않을 수 없었다”고 말해 전작권 전환시기의 재연기를 제의했음을 시사했다. 한·미 양국은 2006년 전작권을 2013년 전환하기로 합의한 뒤 2010년에 전환 시기를 2015년으로 연기했다. - "1982년 숭의여자고등학교를 졸업하고 창단팀 신용보증기금 농구단에 입단하였다. 숭의여고 시절 '초고교급 가드'로 일찌감치 인정받아 명문팀과의\ \ 계약이 유력시 되었지만 '여자 농구의 대모' 박신자를 감독으로 추대하고 고교 유망주들을 적극적으로 영입하는 등 다음 해 시작될 농구대잔치를\ \ 위해 적극적인 스카웃 노력을 기울인 신용보증기금과 결국 계약이 이루어졌다. 1982년 10월 필리핀 마닐라에서 열린 아시아 청소년 여자\ \ 농구 선수권 대회를 위한 청소년 대표팀에 선발되어 주전 가드로서 활약하였으나 대한민국팀은 중국에 밀려 은메달에 그쳤다. \n\n1984년\ \ 5월 쿠바 아바나에서 열린 프레올림픽에 처음 국가대표로 선발되었으며 1984년 LA 올림픽에서는 백업 가드로서 미국과의 결승전 등 세 경기에\ \ 교체 출장하며 은메달을 거머쥐었다. 그해 10월에는 중국 상하이에서 열린 아시아 여자 농구 선수권 대회에도 참가, 인도 전에서 최애영을\ \ 대신하여 베스트 5로 선발 출장 하는 등 백업 가드로 활약하며 대한민국팀의 대회 4연패에 일조하였다.\n\n1986년에도 국가대표팀에 선발되어\ \ 세계 여자 농구 선수권 대회와 아시안게임에 출전하지만 '만년하위팀' 의 오명을 극복하지 못한 소속팀의 부진으로 인해 대중들의 조명을 크게\ \ 받지는 못하였다. 1987-88 시즌 농구대잔치에서 국가대표 구정희와 함께 황금 가드 콤비를 이루며 잠시 돌풍을 일으키기도 했지 신생팀의\ \ 핸디캡과 포스트진의 부재로 우승권에 근접하지 못하고 1988-89 시즌을 끝으로 은퇴를 선언하였다." - 경남도가 매년 경영 손실로 약 300억원에 이르는 부채를 감당할 수 없다는 이유로 도립 진주의료원(사진)을 폐업하기로 결정하자 지역에서 반대 여론이 거세게 일고 있다. 176명의 노조원이 가입한 보건의료산업노조 진주의료원지부는 의료원 폐업 결정이 내려진 직후 긴급 대책회의를 열고 27일 민주노총과 함께 경남도청을 항의 방문했다. 박성용 진주의료원 지부장은 “수천억원의 적자를 낸 경남개발공사는 그대로 두고 200여억원 적자인 진주의료원을 폐업하려는 도의 결정은 취소돼야 한다”며 “공공의료기관으로 수행해왔던 역할은 무시하고 만성 적자를 이유로 문을 닫는 것은 있을 수 없다”고 말했다. 이에 대해 경남도 측은 “매년 수십억원씩 적자를 내고 있어 폐업할 수밖에 없다”고 밝혔다.환자와 가족, 시민들도 폐업 결정에 당황스럽다는 반응이다. 
입원한 아내를 간호하고 있는 유한명 씨(70)는 “생활보호대상자인 중증 환자들이 일반 병원보다 싼 덕분에 치료를 받고 있는데 폐업하려는 것은 환자 보고 죽으라는 것”이라고 토로했다. 진주시 하대동에 사는 박상호 씨(45)는 “사람의 목숨을 다루는 병원에서 자본의 논리에 얽매이는 현실이 안타깝다”고 말했다. 진주의료원이 폐업하면 203명의 환자 이송과 233명의 종사자 재취업 문제도 해결해야 할 과제다. 도는 환자에 대해 자발적 퇴원과 지역 내 인근 병원 이송을 추진하고 직원은 자진 퇴사와 이직을 유도할 방침이다. 한편 도는 이날 남해도립대와 거창도립대를 경남도립대(가칭)로, 문화재단·문화콘텐츠진흥원·영상위원회는 경남문화예술진흥원(가칭)으로 통합한다고 밝혔다. pipeline_tag: sentence-similarity library_name: sentence-transformers --- # SentenceTransformer based on intfloat/multilingual-e5-large-instruct This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct) <!-- at revision c9e87c786ffac96aeaeb42863276930883923ecb --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ '국방부 장관이 제시한 전작권 반환시기는 몇 년도인가?', '한국이 미국에 2015년으로 예정된 전시작전통제권 전환 시기를 다시 연기하자고 제안한 것으로 알려졌다.미국 국방부 고위 당국자는 17일 김관진 국방부 장관이 척 헤이글 국방장관에게 최근 전작권 전환의 재연기를 제안해 양국 정부가 이 문제를 협의하고 있다고 연합뉴스에 밝혔다. 이에 대해 한국 국방부 관계자는 “2013년 전반기에 심각해진 북한 핵 문제 등 안보 상황을 중요한 조건으로 고려하면서 전작권 전환 준비를 점검해 나가자고 미국 측에 제의해 한·미 간 논의 중에 있다”고 말했다. 그는 이어 “전작권 전환은 향후 한·미 안보협의회(SCM), 군사위원회의(MCM) 등을 통해 지속적으로 협의해 나갈 것”이라고 말했다.전작권 재연기론의 배경에는 북한이 지난해 말부터 핵실험을 강행하는 등 대남 전쟁위협 수위를 급격하게 높인 것을 꼽을 수 있다. 북한은 지난 2월 3차 핵실험 이후 정전협정을 백지화하겠다고 위협한 데 이어 ‘1호 전투근무태세’ 명령을 내리는 등 위협 강도를 끌어올렸다.정부가 전작권 전환 재연기를 제의한 시기가 지난 3월이며 김 장관이 샹그릴라 대화에서 헤이글 국방장관에게 전환 시기의 연기를 제의했다는 관측이 제기되기도 했다. 정부의 한 고위 관계자는 “올해 초 북한의 위협이 계속되는 등 남북관계 상황을 고려하지 않을 수 없었다”고 말해 전작권 전환시기의 재연기를 제의했음을 시사했다. 한·미 양국은 2006년 전작권을 2013년 전환하기로 합의한 뒤 2010년에 전환 시기를 2015년으로 연기했다.', "1982년 숭의여자고등학교를 졸업하고 창단팀 신용보증기금 농구단에 입단하였다. 숭의여고 시절 '초고교급 가드'로 일찌감치 인정받아 명문팀과의 계약이 유력시 되었지만 '여자 농구의 대모' 박신자를 감독으로 추대하고 고교 유망주들을 적극적으로 영입하는 등 다음 해 시작될 농구대잔치를 위해 적극적인 스카웃 노력을 기울인 신용보증기금과 결국 계약이 이루어졌다. 
1982년 10월 필리핀 마닐라에서 열린 아시아 청소년 여자 농구 선수권 대회를 위한 청소년 대표팀에 선발되어 주전 가드로서 활약하였으나 대한민국팀은 중국에 밀려 은메달에 그쳤다. \n\n1984년 5월 쿠바 아바나에서 열린 프레올림픽에 처음 국가대표로 선발되었으며 1984년 LA 올림픽에서는 백업 가드로서 미국과의 결승전 등 세 경기에 교체 출장하며 은메달을 거머쥐었다. 그해 10월에는 중국 상하이에서 열린 아시아 여자 농구 선수권 대회에도 참가, 인도 전에서 최애영을 대신하여 베스트 5로 선발 출장 하는 등 백업 가드로 활약하며 대한민국팀의 대회 4연패에 일조하였다.\n\n1986년에도 국가대표팀에 선발되어 세계 여자 농구 선수권 대회와 아시안게임에 출전하지만 '만년하위팀' 의 오명을 극복하지 못한 소속팀의 부진으로 인해 대중들의 조명을 크게 받지는 못하였다. 1987-88 시즌 농구대잔치에서 국가대표 구정희와 함께 황금 가드 콤비를 이루며 잠시 돌풍을 일으키기도 했지 신생팀의 핸디캡과 포스트진의 부재로 우승권에 근접하지 못하고 1988-89 시즌을 끝으로 은퇴를 선언하였다.", ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 17,552 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 1000 samples: | | sentence_0 | sentence_1 | |:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 9 tokens</li><li>mean: 18.99 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 455.66 tokens</li><li>max: 512 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | 
|:--------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>무대의 공감각적 이미지를 살리기 위해 사용한 관악기는?</code> | <code>무대는 끊임없이 관객의 상상력을 자극한다. 비스듬히 경사진 사각 나무판 무대에서 배우들이 맨발로 움직인다. 새의 몸짓으로 역동적인 삼각 군무를 펼치다 원을 그리며 빙글빙글 뛰어다니기도 한다. 새의 영역이던 무대는 점점 기울어져 거대한 성벽이 됐다가 다시 완만해져 위대한 새의 나라 ‘조국(鳥國)’의 안마당으로 변한다. 국립극단이 기획한 ‘아리스토파네스 희극 3부작’ 시리즈의 마지막 무대로 서울 서계동 백성희장민호극장에서 공연 중인 연극 ‘새’(윤조병 극본, 윤시중 연출)는 단출하고 현대적인 무대·언어 미학으로 고전 희극을 풀어낸다. 시리즈 전작인 ‘개구리’ ‘구름’과는 사뭇 다르다. 원작이 쓰여진 2500여년 전 그리스 아테네 상황과 ‘오늘의 한국’을 애써 꿰맞추려 하지 않는다. 공연은 원작의 뼈대와 구성은 그대로 살리되 내용은 과감히 줄이면서 조금씩 윤색해 인물과 결말을 살짝 비틀었다. 인물들의 대사는 간결하고 쉽다. 어렵거나 추상적 표현은 전혀 없이 일상에서 살아 숨 쉬는 언어들을 툭툭 리듬에 맞춰 던진다. 원작이나 전작들처럼 장황하게 늘어놓거나 묘사하지 않는다. 극이 주로 ‘새의 나라’에서 벌어지는 만큼 날개 단 인간들이 ‘새대가리’라고 놀리는 새의 수준에 맞춘 것 같다. 그래서 더 웃기고 재미있고, 뭔가 상상하게 만든다.빚을 지고 현실세계에서 도망친 ‘교활 덩어리’ 피스가 자리와 상황 변화에 따라 시시각각으로 변하는 모습을 지켜보는 재미가 쏠쏠하다. 원작에선 남성인 피스가 여성으로 나오는 것도 흥미롭다. 여생을 편안하게 보낼 수 있는 곳을 찾던 피스는 인간과 신들의 세계를 좌지우지할 수 있는 ‘조국’을 구상하고 건설하는 지도자가 되고, 다시 왕에 오르면서 탐욕과 권력욕에 물든다. ‘새의 나라’에 만족하지 못하고 신의 세계까지 올라가 천상을 지배하려던 피스는 신이 된 듯한 착각에 빠져 그만 날개를 스스로 떼어버리고 추락한다. 원작의 해피엔딩과는 달리 극은 유토피아에 대한 인간의 헛된 꿈과 끝을 모르는 욕심의 종착점을 직설적으로 제시한다. 새의 특성을 분장과 의상, 몸짓으로 보여주는 배우들이 나무판 무대를 타거나 넘거나 뚫거나 휘돌며 극을 만든다. 플루트와 타악기가 어우러져 빚어내는 신비롭고 매력적인 음악이 무대에 입혀져 공감각적 이미지를 만들어낸다. 흥겹고 즐거운 놀이와 환상의 연극성이 충만한 무대다. 공연은 내달 3일까지. 1만~3만원.</code> | | <code>올해 창립 25주년을 맞는 공공연구소는?</code> | <code>삼성그룹 계열 연구기관인 삼성경제연구소(SERI)가 올해로 창립 25주년을 맞는다. 1991년 그룹 내부 연구소로 출발해 연 매출 1600억원 이상을 올리는 국내 최대 민간연구소가 됐다. 한때 ‘세리CEO’, ‘세리 인포메이션’ 등 유료 콘텐츠를 통해 민간연구소 업계에 ‘지식으로 돈 버는 모델’을 제시했던 이 연구소는 최근 컨설팅 회사로 빠르게 변신 중이다. 최근 5년 새 연구인력을 50명 늘렸고 삼성SDS, 삼성중공업 등 계열사 사업 재편의 방향도 이곳에서 조언한다. 
맥킨지 등 외부 컨설팅업체에서 조언을 받던 삼성 계열사들은 사업 재편 등 핵심 사안에 대한 컨설팅 용역을 삼성경제연구소에 맡기는 추세다.○‘지식기업’ 꿈꿨던 SERI삼성경제연구소(사장 정기영·사진)는 1986년 삼성생명 부속 조직으로 출발해 1991년 그룹 연구조직으로 확대 개편됐다. 삼성전자 등 그룹 내 주요 계열사 대상 연구용역과 임직원 재교육을 주로 담당해왔다. ‘돈 버는 일’보다 ‘경영 자문’이 이 연구소의 주된 역할이었다. 그러던 1998년, 삼성경제연구소는 변신에 나섰다. 삼성그룹 고위 임원에게 제공하던 내부 콘텐츠인 세리CEO를 외부에 개방하기 시작했다. 세리CEO는 최신 경영 트렌드, 경제동향, 산업·기술 변화, 인문학, 매니지먼트, 리더십, 철학, 문학, 스포츠 등을 동영상 등 멀티미디어 콘텐츠로 제공하는 ‘통섭형’ 지식상품이다. 제공 콘텐츠는 1만2000여건이다. 삼성경제연구소는 세리CEO 콘텐츠 제공 대가로 100만원이 넘는 연회비를 받았다. ‘지식으로 돈을 버는’ 수익형 연구소로 탈바꿈한 것. 비싼 회비에도 세리CEO의 인기는 뜨거웠다. 외부 개방 첫해부터 기업, 교수, 관료 등 오피니언리더들의 가입이 줄을 이었다. 120만~150만원을 내는 개인·단체 유료회원은 1만3300여명(2014년 기준). 여기에 국방부와 일선 학교 등 콘텐츠를 일괄 제공받는 준회원을 합하면 30만여명에 달한다.실적도 좋았다. 세리CEO의 매출과 영업이익은 2011년 각각 206억원과 93억원, 2012년 각각 190억원과 87억원을 기록했다. 세리CEO 인기 덕분에 삼성경제연구소 매출(연구용역+인력교육)도 급증했다. 2001년 382억원이던 매출은 2013년 1660억원으로 4배가량 늘었다. 2013년 매출은 경쟁사인 LG경제연구원의 2.2배, 현대경제연구원의 6.7배에 달한다.○계열사 경영자문…삼성의 ‘컨설팅 펌’세리CEO를 내세워 잘나가던 삼성경제연구소는 2013년 또 한 번 변신을 시도했다. 2012년 자회사로 떼어낸 세리CEO를 이듬해 11월 그룹 계열사인 크레듀에 전격 매각했다. 비슷한 시기 삼성경제연구소는 매년 하반기 외부에 공개해왔던 성장률·환율·유가 동향 등을 담은 ‘경제 전망’ 발표도 중단했다. 그룹 관계자는 “지식콘텐츠 사업은 크레듀로 일원화하고 삼성경제연구소는 컨설팅 전문조직으로 바꾸기 위한 시도”라고 설명했다.외부 콘텐츠 제공사업을 전면 중단한 삼성경제연구소는 내부 컨설팅 전문조직으로 탈바꿈했다. 우선 2009년 100여명이던 연구인력을 작년 말 150여명으로 늘렸다. LG경제연구원(103명), 현대경제연구원(50명)과 비교하면 월등히 많은 인력 규모다. 다음달 건설·엔지니어링, 광고·호텔·식음료 등 서비스 부문 연구인력 10여명을 추가 채용하는 등 연구조직을 계속 확충한다는 계획이다.계열사 컨설팅 업무 비중도 크게 늘었다. 2013년 그룹 계열사에 대한 경영자문으로 올린 매출은 778억원으로 전년(2012년) 대비 100억원 가까이 늘었다. 경영자문과 함께 인력 재교육을 해주고서 올린 매출(2013년 기준)도 삼성전자 811억원, 삼성디스플레이 117억원, 삼성물산 81억원 등에 달한다. 재계 관계자는 “삼성그룹이 2013년부터 추진한 계열사 구조조정의 상당수가 삼성경제연구소 컨설팅을 받아 진행된 것들”이라며 “(삼성경제연구소가) ‘미래 삼성’의 방향성을 제시할 두뇌 조직으로 변신하고 있다”고 설명했다.</code> | | <code>동부와의 인수합병을 찬성하는 사람은?</code> | <code>“그동안 380억원을 투자해서 못해도 400억원 이상은 받아야 한다.”(동부그룹)“앞으로 들어갈 돈이 최소한 80억원이어서 290억원 이상은 안된다.”(화성그린팜)동부그룹이 경기 화성에 지은 토마토용 유리온실 매각 작업이 표류하고 있다. 당초 지난달 말까지 본계약을 맺기로 했지만 사는 쪽과 파는 쪽의 눈높이가 달라 이견이 좁혀지지 않고 있다.양측의 의견 차이가 가장 큰 부문은 가격. 유리온실을 매각하려는 동부그룹은 400억원 이상은 받아야 한다고 주장한다. 2010년 7월부터 작년 말까지 화성에 아시아 최대 규모(15만㎡)의 유리온실을 완공하는 데 380억원이 들었기 때문이다. 반면 유리온실을 인수하려는 화성그린팜은 290억원 이상 줄 수 없다고 맞서고 있다. 유리온실 인수 후 시설을 보수하고 토마토 경작을 정상화하는 데 80억원가량이 더 들 것으로 보고 있어서다. 화성그린팜은 화성지역 12개 농협과 5개 화성시 농민단체, 1개 영농법인 등으로 구성돼 있다.화성그린팜은 또 동부가 보유한 유리온실 지분(68.4%) 외에 나머지 지분도 모두 넘길 것을 인수 조건으로 내세우고 있다. 남기철 화성그린팜 회장은 “동부 외에 누가 주주인지도 모르는데 어떻게 같이 사업을 할 수 있느냐”며 “동부가 2대 주주로 들어오든지 아니면 지분 100%를 다 넘겨야 한다”고 말했다. 동부가 유리온실 예비협상대상자를 선정하지 않은 것도 매각 작업이 늦어지는 요인으로 지적되고 있다. 동부는 작년 말 유리온실을 완공한 뒤 이곳에서 수확한 토마토를 전량 수출하겠다고 했지만 농민들의 불매운동에 부딪혀 지난 3월 사업을 포기하고 유리온실을 매각하기로 했다. 
지난 6월 화성그린팜과 양해각서(MOU)를 맺고 당초 9월 말까지 협상을 끝내기로 했다가 10월 말로 한 차례 연장했다.</code> | * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters: ```json { "scale": 20.0, "similarity_fct": "cos_sim" } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `num_train_epochs`: 1 - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: no - `prediction_loss_only`: True - `per_device_train_batch_size`: 16 - `per_device_eval_batch_size`: 16 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 1 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: False - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `eval_do_concat_batches`: True - `fp16_backend`: auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - 
`torch_compile_mode`: None - `dispatch_batches`: None - `split_batches`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `batch_sampler`: no_duplicates - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | |:------:|:----:|:-------------:| | 0.4558 | 500 | 0.1965 | | 0.9116 | 1000 | 0.0956 | ### Framework Versions - Python: 3.10.12 - Sentence Transformers: 3.1.1 - Transformers: 4.45.2 - PyTorch: 2.5.1+cu121 - Accelerate: 1.1.1 - Datasets: 3.1.0 - Tokenizers: 0.20.3 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
optimum-internal-testing/tiny-random-flux
optimum-internal-testing
2024-11-26T12:13:53Z
23,200
2
diffusers
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:FluxPipeline", "region:us" ]
text-to-image
2024-10-23T00:01:18Z
--- library_name: diffusers --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
kaitchup/Qwen2.5-72B-Instruct-AutoRound-GPTQ-2bit
kaitchup
2024-11-26T12:10:50Z
25
4
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "auto-gptq", "AutoRound", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "2-bit", "gptq", "region:us" ]
text-generation
2024-11-12T20:18:09Z
--- language: - en library_name: transformers tags: - auto-gptq - AutoRound license: apache-2.0 --- ## Model Details This is [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main) (symmetric quantization) and serialized with the GPTQ format in 2-bit. The model has been created, tested, and evaluated by The Kaitchup. Details on the quantization process and how to use the model can be found here: [The Recipe for Extremely Accurate and Cheap Quantization of 70B+ LLMs](https://kaitchup.substack.com/p/the-recipe-for-extremely-accurate-quantization) It is possible to fine-tune an adapter on top of it following the QLoRA methodology. More about this can be found here: [QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU](https://newsletter.kaitchup.com/p/qlora-with-autoround-cheaper-and) - **Developed by:** [The Kaitchup](https://newsletter.kaitchup.com/) - **Language(s) (NLP):** English - **License:** Apache 2.0 license
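Below is a minimal loading sketch for this checkpoint. Only the repository id comes from this card; the prompt, generation settings, and the assumption that a GPTQ backend (e.g. `auto-gptq`/`optimum` or `gptqmodel`) is installed are illustrative, so treat it as a starting point rather than the reference usage from the linked article.

```python
# Minimal sketch (not the article's reference code): load the 2-bit GPTQ
# checkpoint with plain transformers. Assumes a GPTQ backend is installed
# and enough GPU memory to shard the model with device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kaitchup/Qwen2.5-72B-Instruct-AutoRound-GPTQ-2bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain AutoRound quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```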
kaitchup/Qwen2.5-72B-Instruct-AutoRound-GPTQ-4bit
kaitchup
2024-11-26T12:10:42Z
9
6
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "auto-gptq", "AutoRound", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "gptq", "region:us" ]
text-generation
2024-11-12T20:25:43Z
--- language: - en library_name: transformers tags: - auto-gptq - AutoRound license: apache-2.0 --- ## Model Details This is [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) quantized with [AutoRound](https://github.com/intel/auto-round/tree/main) (symmetric quantization) and serialized with the GPTQ format in 4-bit. The model has been created, tested, and evaluated by The Kaitchup. Details on the quantization process and how to use the model can be found here: [The Recipe for Extremely Accurate and Cheap Quantization of 70B+ LLMs](https://kaitchup.substack.com/p/the-recipe-for-extremely-accurate-quantization) It is possible to fine-tune an adapter on top of it following the QLoRA methodology. More about this can be found here: [QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU](https://newsletter.kaitchup.com/p/qlora-with-autoround-cheaper-and) - **Developed by:** [The Kaitchup](https://newsletter.kaitchup.com/) - **Language(s) (NLP):** English - **License:** Apache 2.0 license
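As a rough illustration of the adapter fine-tuning mentioned above, the sketch below attaches a LoRA adapter to the quantized checkpoint with `peft`. The rank, alpha, dropout, and target modules are placeholder defaults, not the values used in the linked QLoRA article.

```python
# Hedged sketch of QLoRA-style adapter training on top of the 4-bit GPTQ model.
# Hyperparameters and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "kaitchup/Qwen2.5-72B-Instruct-AutoRound-GPTQ-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model = prepare_model_for_kbit_training(model)  # freeze base weights, prepare for adapter training

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```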
codelion/MathCoT
codelion
2024-11-26T12:08:10Z
23
2
transformers
[ "transformers", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T10:44:57Z
--- base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** codelion - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
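A possible inference sketch with Unsloth is shown below; the card only describes the training setup, so the sequence length, the 4-bit loading flag, and the sample prompt are assumptions.

```python
# Illustrative inference sketch with Unsloth (not an official usage snippet).
# max_seq_length, load_in_4bit, and the prompt are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codelion/MathCoT",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast generation mode

messages = [{"role": "user", "content": "Solve step by step: 12 * 15 - 37"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```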
kmuhammadrifthy/pegasus-samsum
kmuhammadrifthy
2024-11-26T12:04:56Z
104
0
transformers
[ "transformers", "tensorboard", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:samsum", "base_model:google/pegasus-cnn_dailymail", "base_model:finetune:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-26T11:19:33Z
--- library_name: transformers base_model: google/pegasus-cnn_dailymail tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4837 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6663 | 0.5430 | 500 | 1.4837 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
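A minimal usage sketch for dialogue summarization is shown below; the sample chat is invented for illustration and the generation length limits are arbitrary.

```python
# Minimal usage sketch: summarize a SAMSum-style chat transcript.
# The dialogue and length limits are illustrative, not from the card.
from transformers import pipeline

summarizer = pipeline("summarization", model="kmuhammadrifthy/pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you then!"
)
print(summarizer(dialogue, max_length=60, min_length=5)[0]["summary_text"])
```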
RylanSchaeffer/collapse_gemma-2-27b_hs2_replace_iter2_sftsd1
RylanSchaeffer
2024-11-26T12:02:11Z
6
0
null
[ "safetensors", "gemma2", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2-27b", "base_model:finetune:google/gemma-2-27b", "license:gemma", "region:us" ]
null
2024-11-26T11:51:53Z
--- license: gemma base_model: google/gemma-2-27b tags: - trl - sft - generated_from_trainer model-index: - name: collapse_gemma-2-27b_hs2_replace_iter2_sftsd1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # collapse_gemma-2-27b_hs2_replace_iter2_sftsd1 This model is a fine-tuned version of [google/gemma-2-27b](https://huggingface.co/google/gemma-2-27b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1843 - Num Input Tokens Seen: 3808768 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-06 - train_batch_size: 4 - eval_batch_size: 16 - seed: 1 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:-----------------:| | No log | 0 | 0 | 1.1282 | 0 | | 2.5155 | 0.0608 | 5 | 1.0474 | 236020 | | 2.3221 | 0.1216 | 10 | 1.0643 | 471208 | | 1.8872 | 0.1824 | 15 | 1.0913 | 707824 | | 1.5782 | 0.2432 | 20 | 1.1446 | 943300 | | 1.3696 | 0.3040 | 25 | 1.1695 | 1175352 | | 1.1143 | 0.3647 | 30 | 1.1811 | 1412980 | | 1.1623 | 0.4255 | 35 | 1.1684 | 1635940 | | 1.235 | 0.4863 | 40 | 1.1777 | 1866292 | | 1.1213 | 0.5471 | 45 | 1.1692 | 2096140 | | 1.125 | 0.6079 | 50 | 1.1775 | 2327208 | | 1.0627 | 0.6687 | 55 | 1.1690 | 2561988 | | 0.9847 | 0.7295 | 60 | 1.1899 | 2784360 | | 1.0474 | 0.7903 | 65 | 1.1640 | 3017608 | | 0.9585 | 0.8511 | 70 | 1.1777 | 3249220 | | 0.9715 | 0.9119 | 75 | 1.1700 | 3485472 | | 1.0036 | 0.9726 | 80 | 1.1909 | 3722208 | ### Framework versions - Transformers 4.44.0 - Pytorch 2.4.0+cu121 - Datasets 2.20.0 - Tokenizers 0.19.1
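For reference, the hyperparameters listed above map roughly onto the following `TrainingArguments` sketch; the output directory is a placeholder and any option not listed in the card keeps its default.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters reported above.
# output_dir is a placeholder; unlisted options keep their defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="collapse_gemma-2-27b_hs2_replace_iter2_sftsd1",
    learning_rate=8e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=32,  # 4 * 32 = total train batch size 128
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.05,
    num_train_epochs=1,
    seed=1,
)
```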
PEGurevich/detr-finetuned-balloon-v2-noaug-scheduled
PEGurevich
2024-11-26T11:59:19Z
193
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-11-26T11:59:10Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
yunusserhat/unsloth_finetune_Qwen2
yunusserhat
2024-11-26T11:57:27Z
9
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-26T11:52:18Z
--- base_model: unsloth/qwen2-vl-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_vl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** yunusserhat - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2-vl-7b-instruct-bnb-4bit This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
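A hedged loading sketch with plain transformers is given below; the card does not include usage code, so the image URL, prompt, and generation settings are made up for illustration.

```python
# Illustrative sketch: image + text chat with the fine-tuned Qwen2-VL checkpoint.
# The image URL and prompt are placeholders, not values from the card.
import requests
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "yunusserhat/unsloth_finetune_Qwen2"
model = Qwen2VLForConditionalGeneration.from_pretrained(model_id, device_map="auto")
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/sample.jpg", stream=True).raw)
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Describe this image."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output, skip_special_tokens=True)[0])
```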
ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF
ggml-org
2024-11-26T11:57:20Z
1,774
3
transformers
[ "transformers", "gguf", "code", "qwen", "qwen-coder", "codeqwen", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:Qwen/Qwen2.5-Coder-3B", "base_model:quantized:Qwen/Qwen2.5-Coder-3B", "license:other", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-11-26T11:56:07Z
--- license: other license_name: qwen-research license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-3B/blob/main/LICENSE language: - en base_model: Qwen/Qwen2.5-Coder-3B pipeline_tag: text-generation library_name: transformers tags: - code - qwen - qwen-coder - codeqwen - llama-cpp - gguf-my-repo --- # ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF --hf-file qwen2.5-coder-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` ### Server: ```bash llama-server --hf-repo ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF --hf-file qwen2.5-coder-3b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (for example, LLAMA_CUDA=1 for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF --hf-file qwen2.5-coder-3b-q8_0.gguf -p "The meaning to life and the universe is" ``` or ``` ./llama-server --hf-repo ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF --hf-file qwen2.5-coder-3b-q8_0.gguf -c 2048 ```
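For completeness, a hedged Python sketch that is not part of the converted card: the same GGUF file can also be loaded via `llama-cpp-python`, assuming `pip install llama-cpp-python huggingface_hub`.

```python
# Hedged sketch: run the Q8_0 GGUF file from Python via llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF",
    filename="qwen2.5-coder-3b-q8_0.gguf",
    n_ctx=2048,  # mirrors the -c 2048 used in the llama-server example above
)
completion = llm("The meaning to life and the universe is", max_tokens=64)
print(completion["choices"][0]["text"])
```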
tedad09/DuplicateCrossEncoder-FirstTrain-10Epochs
tedad09
2024-11-26T11:48:42Z
118
0
transformers
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "cross-encoder", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T11:47:32Z
--- library_name: transformers tags: - cross-encoder --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
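The auto-generated card above carries no usage code; the snippet below is a hedged sketch of scoring a sentence pair with this checkpoint as a duplicate-detection cross-encoder. Whether the head emits a single similarity logit or several class logits is an assumption, so inspect `config.num_labels` / `id2label` before interpreting the output; the example pair is illustrative.

```python
# Hedged sketch: treat tedad09/DuplicateCrossEncoder-FirstTrain-10Epochs as a
# sequence-classification cross-encoder over sentence pairs. The example pair
# is illustrative; interpret the logits according to config.num_labels/id2label.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "tedad09/DuplicateCrossEncoder-FirstTrain-10Epochs"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text_a = "How do I reset my password?"
text_b = "What is the procedure for changing my password?"
inputs = tokenizer(text_a, text_b, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(logits, model.config.id2label)
```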
PrunaAI/MrRobotoAI-Freyja-v4.95a-7b-NON-FICTION-bnb-8bit-smashed
PrunaAI
2024-11-26T11:46:05Z
5
0
null
[ "safetensors", "mistral", "pruna-ai", "8-bit", "bitsandbytes", "region:us" ]
null
2024-11-26T11:38:28Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: MrRobotoAI/Freyja-v4.95a-7b-NON-FICTION metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with llm-int8. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking the model directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished.
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo MrRobotoAI/Freyja-v4.95a-7b-NON-FICTION installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install transformers accelerate bitsandbytes>0.37.0 ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/MrRobotoAI-Freyja-v4.95a-7b-NON-FICTION-bnb-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("MrRobotoAI/Freyja-v4.95a-7b-NON-FICTION") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model MrRobotoAI/Freyja-v4.95a-7b-NON-FICTION before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
PrunaAI/Deev124-hermes-llama3-roleplay-1000-v8-bnb-8bit-smashed
PrunaAI
2024-11-26T11:44:27Z
5
0
null
[ "safetensors", "llama", "pruna-ai", "base_model:Deev124/hermes-llama3-roleplay-1000-v8", "base_model:quantized:Deev124/hermes-llama3-roleplay-1000-v8", "8-bit", "bitsandbytes", "region:us" ]
null
2024-11-26T11:34:25Z
--- thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg" base_model: Deev124/hermes-llama3-roleplay-1000-v8 metrics: - memory_disk - memory_inference - inference_latency - inference_throughput - inference_CO2_emissions - inference_energy_consumption tags: - pruna-ai --- <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://docs.pruna.ai/en/latest/setup/pip.html" target="_blank" rel="noopener noreferrer"> <img src="https://imgur.com/rVAgqMY.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with llm-int8. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend benchmarking the model directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished.
"Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo Deev124/hermes-llama3-roleplay-1000-v8 installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install transformers accelerate bitsandbytes>0.37.0 ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/Deev124-hermes-llama3-roleplay-1000-v8-bnb-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("Deev124/hermes-llama3-roleplay-1000-v8") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Deev124/hermes-llama3-roleplay-1000-v8 before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Do it by yourself [here](https://docs.pruna.ai/en/latest/setup/pip.html).
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task1_organization_fold0
MayBashendy
2024-11-26T11:42:57Z
183
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T11:15:03Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task1_organization_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k40_task1_organization_fold0 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5051 - Qwk: 0.3717 - Mse: 1.5051 - Rmse: 1.2268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0102 | 2 | 4.5510 | -0.0993 | 4.5510 | 2.1333 | | No log | 0.0203 | 4 | 2.5560 | 0.0764 | 2.5560 | 1.5987 | | No log | 0.0305 | 6 | 2.0952 | -0.0914 | 2.0952 | 1.4475 | | No log | 0.0406 | 8 | 2.2067 | -0.1318 | 2.2067 | 1.4855 | | No log | 0.0508 | 10 | 2.5695 | -0.1464 | 2.5695 | 1.6030 | | No log | 0.0609 | 12 | 1.7913 | -0.1205 | 1.7913 | 1.3384 | | No log | 0.0711 | 14 | 1.3811 | 0.0233 | 1.3811 | 1.1752 | | No log | 0.0812 | 16 | 1.3412 | 0.0 | 1.3412 | 1.1581 | | No log | 0.0914 | 18 | 1.2861 | 0.0 | 1.2861 | 1.1341 | | No log | 0.1015 | 20 | 1.2485 | 0.0 | 1.2485 | 1.1174 | | No log | 0.1117 | 22 | 1.2665 | 0.0 | 1.2665 | 1.1254 | | No log | 0.1218 | 24 | 1.2638 | 0.0 | 1.2638 | 1.1242 | | No log | 0.1320 | 26 | 1.2530 | 0.0 | 1.2530 | 1.1194 | | No log | 0.1421 | 28 | 1.2674 | 0.0 | 1.2674 | 1.1258 | | No log | 0.1523 | 30 | 1.3591 | 0.0 | 1.3591 | 1.1658 | | No log | 0.1624 | 32 | 1.3930 | -0.0107 | 1.3930 | 1.1803 | | No log | 0.1726 | 34 | 1.2974 | -0.0479 | 1.2974 | 1.1390 | | No log | 0.1827 | 36 | 1.2310 | 0.0275 | 1.2310 | 1.1095 | | No log | 0.1929 | 38 | 1.1690 | 0.1233 | 1.1690 | 1.0812 | | No log | 0.2030 | 40 | 1.2112 | 0.1402 | 1.2112 | 1.1006 | | No log | 0.2132 | 42 | 1.3930 | 0.1499 | 1.3930 | 1.1802 | | No log | 0.2234 | 44 | 1.2702 | 0.1080 | 1.2702 | 1.1270 | | No log | 0.2335 | 46 | 1.2063 | 0.1127 | 1.2063 | 1.0983 | | No log | 0.2437 | 48 | 1.1720 | 0.0463 | 1.1720 | 1.0826 | | No log | 0.2538 | 50 | 1.2400 | 0.0737 | 1.2400 | 1.1135 | | No log | 0.2640 | 52 | 1.2941 | 0.0607 | 1.2941 | 1.1376 | | No log | 0.2741 | 54 | 1.3230 | 0.0 | 1.3230 | 1.1502 | | No log | 0.2843 | 56 | 1.3231 | 0.0 | 1.3231 | 1.1503 | | No log | 0.2944 | 58 | 1.2881 | 0.0 | 1.2881 | 1.1349 | | No log | 0.3046 | 60 | 1.1929 | 0.0607 | 1.1929 | 1.0922 | | No log | 0.3147 | 62 | 1.1886 | 0.0607 | 1.1886 | 1.0902 | | No log | 0.3249 | 64 | 1.1571 | 0.0630 | 1.1571 | 1.0757 | | No log | 0.3350 | 66 | 1.1481 | 0.0393 | 1.1481 | 1.0715 | | No log | 0.3452 | 68 | 1.1528 | 0.0393 | 1.1528 | 1.0737 | | No log | 0.3553 | 70 | 1.0974 | 0.1041 | 1.0974 | 1.0476 | | No log | 0.3655 | 72 | 1.0951 | 0.0760 | 1.0951 | 1.0465 | | No log | 0.3756 | 74 | 1.1213 | 
0.0607 | 1.1213 | 1.0589 | | No log | 0.3858 | 76 | 1.2418 | 0.0 | 1.2418 | 1.1144 | | No log | 0.3959 | 78 | 1.4426 | 0.0 | 1.4426 | 1.2011 | | No log | 0.4061 | 80 | 1.5580 | 0.0337 | 1.5580 | 1.2482 | | No log | 0.4162 | 82 | 1.6052 | 0.0337 | 1.6052 | 1.2670 | | No log | 0.4264 | 84 | 1.6465 | 0.0432 | 1.6465 | 1.2832 | | No log | 0.4365 | 86 | 1.6212 | 0.0432 | 1.6212 | 1.2733 | | No log | 0.4467 | 88 | 1.5003 | 0.0337 | 1.5003 | 1.2249 | | No log | 0.4569 | 90 | 1.4188 | 0.0 | 1.4188 | 1.1911 | | No log | 0.4670 | 92 | 1.3674 | 0.0 | 1.3674 | 1.1693 | | No log | 0.4772 | 94 | 1.3925 | 0.0 | 1.3925 | 1.1801 | | No log | 0.4873 | 96 | 1.3511 | 0.0 | 1.3511 | 1.1624 | | No log | 0.4975 | 98 | 1.2943 | 0.0 | 1.2943 | 1.1377 | | No log | 0.5076 | 100 | 1.2317 | 0.0 | 1.2317 | 1.1098 | | No log | 0.5178 | 102 | 1.1426 | 0.0 | 1.1426 | 1.0689 | | No log | 0.5279 | 104 | 1.0734 | 0.1579 | 1.0734 | 1.0360 | | No log | 0.5381 | 106 | 1.0684 | 0.1708 | 1.0684 | 1.0336 | | No log | 0.5482 | 108 | 1.1100 | 0.1579 | 1.1100 | 1.0536 | | No log | 0.5584 | 110 | 1.1321 | 0.1471 | 1.1321 | 1.0640 | | No log | 0.5685 | 112 | 1.1178 | 0.1599 | 1.1178 | 1.0572 | | No log | 0.5787 | 114 | 1.0808 | 0.1984 | 1.0808 | 1.0396 | | No log | 0.5888 | 116 | 1.1167 | 0.1579 | 1.1167 | 1.0567 | | No log | 0.5990 | 118 | 1.1899 | 0.1212 | 1.1899 | 1.0908 | | No log | 0.6091 | 120 | 1.1505 | 0.1579 | 1.1505 | 1.0726 | | No log | 0.6193 | 122 | 1.0910 | 0.1944 | 1.0910 | 1.0445 | | No log | 0.6294 | 124 | 1.0395 | 0.3179 | 1.0395 | 1.0196 | | No log | 0.6396 | 126 | 1.0352 | 0.2691 | 1.0352 | 1.0175 | | No log | 0.6497 | 128 | 1.0132 | 0.2835 | 1.0132 | 1.0066 | | No log | 0.6599 | 130 | 1.0060 | 0.3087 | 1.0060 | 1.0030 | | No log | 0.6701 | 132 | 1.0289 | 0.2130 | 1.0289 | 1.0144 | | No log | 0.6802 | 134 | 1.0374 | 0.2130 | 1.0374 | 1.0186 | | No log | 0.6904 | 136 | 1.0079 | 0.3478 | 1.0079 | 1.0039 | | No log | 0.7005 | 138 | 0.9809 | 0.4082 | 0.9809 | 0.9904 | | No log | 0.7107 | 140 | 0.9597 | 0.4096 | 0.9597 | 0.9797 | | No log | 0.7208 | 142 | 0.9472 | 0.4219 | 0.9472 | 0.9733 | | No log | 0.7310 | 144 | 0.9585 | 0.3602 | 0.9585 | 0.9790 | | No log | 0.7411 | 146 | 0.9384 | 0.3973 | 0.9384 | 0.9687 | | No log | 0.7513 | 148 | 0.9276 | 0.4096 | 0.9276 | 0.9631 | | No log | 0.7614 | 150 | 0.9162 | 0.3771 | 0.9162 | 0.9572 | | No log | 0.7716 | 152 | 0.9183 | 0.3771 | 0.9183 | 0.9583 | | No log | 0.7817 | 154 | 0.9513 | 0.3937 | 0.9513 | 0.9754 | | No log | 0.7919 | 156 | 0.9759 | 0.3965 | 0.9759 | 0.9879 | | No log | 0.8020 | 158 | 0.9800 | 0.3738 | 0.9800 | 0.9899 | | No log | 0.8122 | 160 | 0.9675 | 0.3968 | 0.9675 | 0.9836 | | No log | 0.8223 | 162 | 0.9797 | 0.3820 | 0.9797 | 0.9898 | | No log | 0.8325 | 164 | 0.9877 | 0.3825 | 0.9877 | 0.9938 | | No log | 0.8426 | 166 | 1.0364 | 0.2663 | 1.0364 | 1.0181 | | No log | 0.8528 | 168 | 1.0653 | 0.2663 | 1.0653 | 1.0321 | | No log | 0.8629 | 170 | 1.0019 | 0.2817 | 1.0019 | 1.0009 | | No log | 0.8731 | 172 | 0.9176 | 0.3956 | 0.9176 | 0.9579 | | No log | 0.8832 | 174 | 0.9074 | 0.3730 | 0.9074 | 0.9526 | | No log | 0.8934 | 176 | 0.9621 | 0.3344 | 0.9621 | 0.9808 | | No log | 0.9036 | 178 | 1.0169 | 0.2932 | 1.0169 | 1.0084 | | No log | 0.9137 | 180 | 0.9939 | 0.3019 | 0.9939 | 0.9970 | | No log | 0.9239 | 182 | 0.9326 | 0.2970 | 0.9326 | 0.9657 | | No log | 0.9340 | 184 | 0.9398 | 0.2987 | 0.9398 | 0.9694 | | No log | 0.9442 | 186 | 1.0683 | 0.3402 | 1.0683 | 1.0336 | | No log | 0.9543 | 188 | 1.2241 | 0.2423 | 1.2241 | 1.1064 | | No log | 0.9645 | 190 | 1.2471 
| 0.2423 | 1.2471 | 1.1167 | | No log | 0.9746 | 192 | 1.1516 | 0.2830 | 1.1516 | 1.0731 | | No log | 0.9848 | 194 | 1.0208 | 0.3020 | 1.0208 | 1.0104 | | No log | 0.9949 | 196 | 0.9130 | 0.2974 | 0.9130 | 0.9555 | | No log | 1.0051 | 198 | 0.8967 | 0.3026 | 0.8967 | 0.9469 | | No log | 1.0152 | 200 | 0.9792 | 0.3120 | 0.9792 | 0.9895 | | No log | 1.0254 | 202 | 1.1953 | 0.2678 | 1.1953 | 1.0933 | | No log | 1.0355 | 204 | 1.4914 | 0.1750 | 1.4914 | 1.2212 | | No log | 1.0457 | 206 | 1.5783 | 0.1310 | 1.5783 | 1.2563 | | No log | 1.0558 | 208 | 1.5190 | 0.0809 | 1.5190 | 1.2325 | | No log | 1.0660 | 210 | 1.3012 | 0.1833 | 1.3012 | 1.1407 | | No log | 1.0761 | 212 | 1.0466 | 0.2489 | 1.0466 | 1.0230 | | No log | 1.0863 | 214 | 0.9018 | 0.3897 | 0.9018 | 0.9496 | | No log | 1.0964 | 216 | 0.8703 | 0.4309 | 0.8703 | 0.9329 | | No log | 1.1066 | 218 | 0.8702 | 0.4336 | 0.8702 | 0.9328 | | No log | 1.1168 | 220 | 0.9201 | 0.3074 | 0.9201 | 0.9592 | | No log | 1.1269 | 222 | 1.0196 | 0.3003 | 1.0196 | 1.0097 | | No log | 1.1371 | 224 | 1.1648 | 0.2150 | 1.1648 | 1.0792 | | No log | 1.1472 | 226 | 1.2117 | 0.2520 | 1.2117 | 1.1008 | | No log | 1.1574 | 228 | 1.2024 | 0.2837 | 1.2024 | 1.0965 | | No log | 1.1675 | 230 | 1.2237 | 0.2553 | 1.2237 | 1.1062 | | No log | 1.1777 | 232 | 1.1882 | 0.2553 | 1.1882 | 1.0901 | | No log | 1.1878 | 234 | 1.1353 | 0.2636 | 1.1353 | 1.0655 | | No log | 1.1980 | 236 | 1.1527 | 0.2305 | 1.1527 | 1.0737 | | No log | 1.2081 | 238 | 1.1705 | 0.2405 | 1.1705 | 1.0819 | | No log | 1.2183 | 240 | 1.1712 | 0.2558 | 1.1712 | 1.0822 | | No log | 1.2284 | 242 | 1.0982 | 0.3500 | 1.0982 | 1.0479 | | No log | 1.2386 | 244 | 1.0544 | 0.3598 | 1.0544 | 1.0269 | | No log | 1.2487 | 246 | 1.0025 | 0.3443 | 1.0025 | 1.0012 | | No log | 1.2589 | 248 | 0.9563 | 0.4470 | 0.9563 | 0.9779 | | No log | 1.2690 | 250 | 0.9920 | 0.4893 | 0.9920 | 0.9960 | | No log | 1.2792 | 252 | 1.0551 | 0.4737 | 1.0551 | 1.0272 | | No log | 1.2893 | 254 | 1.1089 | 0.4460 | 1.1089 | 1.0530 | | No log | 1.2995 | 256 | 1.0993 | 0.4449 | 1.0993 | 1.0485 | | No log | 1.3096 | 258 | 1.0092 | 0.4900 | 1.0092 | 1.0046 | | No log | 1.3198 | 260 | 0.9213 | 0.5170 | 0.9213 | 0.9598 | | No log | 1.3299 | 262 | 0.9043 | 0.5073 | 0.9043 | 0.9510 | | No log | 1.3401 | 264 | 0.9526 | 0.4585 | 0.9526 | 0.9760 | | No log | 1.3503 | 266 | 1.0900 | 0.3494 | 1.0900 | 1.0440 | | No log | 1.3604 | 268 | 1.2096 | 0.2277 | 1.2096 | 1.0998 | | No log | 1.3706 | 270 | 1.2490 | 0.2115 | 1.2490 | 1.1176 | | No log | 1.3807 | 272 | 1.2218 | 0.2830 | 1.2218 | 1.1053 | | No log | 1.3909 | 274 | 1.2030 | 0.3762 | 1.2030 | 1.0968 | | No log | 1.4010 | 276 | 1.1678 | 0.3455 | 1.1678 | 1.0806 | | No log | 1.4112 | 278 | 1.0666 | 0.4160 | 1.0666 | 1.0328 | | No log | 1.4213 | 280 | 1.0370 | 0.3880 | 1.0370 | 1.0183 | | No log | 1.4315 | 282 | 1.1343 | 0.2896 | 1.1343 | 1.0650 | | No log | 1.4416 | 284 | 1.0927 | 0.3269 | 1.0927 | 1.0453 | | No log | 1.4518 | 286 | 1.0585 | 0.3237 | 1.0585 | 1.0288 | | No log | 1.4619 | 288 | 0.9724 | 0.3674 | 0.9724 | 0.9861 | | No log | 1.4721 | 290 | 0.9461 | 0.4003 | 0.9461 | 0.9727 | | No log | 1.4822 | 292 | 0.9824 | 0.4507 | 0.9824 | 0.9911 | | No log | 1.4924 | 294 | 0.9524 | 0.4644 | 0.9524 | 0.9759 | | No log | 1.5025 | 296 | 0.9156 | 0.4708 | 0.9156 | 0.9568 | | No log | 1.5127 | 298 | 0.9710 | 0.4646 | 0.9710 | 0.9854 | | No log | 1.5228 | 300 | 1.1630 | 0.4685 | 1.1630 | 1.0784 | | No log | 1.5330 | 302 | 1.3070 | 0.3513 | 1.3070 | 1.1433 | | No log | 1.5431 | 304 | 1.3179 | 0.3898 | 1.3179 | 
1.1480 | | No log | 1.5533 | 306 | 1.3805 | 0.3830 | 1.3805 | 1.1750 | | No log | 1.5635 | 308 | 1.2464 | 0.3944 | 1.2464 | 1.1164 | | No log | 1.5736 | 310 | 1.0940 | 0.4588 | 1.0940 | 1.0459 | | No log | 1.5838 | 312 | 1.0793 | 0.4597 | 1.0793 | 1.0389 | | No log | 1.5939 | 314 | 1.2218 | 0.4148 | 1.2218 | 1.1053 | | No log | 1.6041 | 316 | 1.4590 | 0.3593 | 1.4590 | 1.2079 | | No log | 1.6142 | 318 | 1.5704 | 0.2969 | 1.5704 | 1.2532 | | No log | 1.6244 | 320 | 1.5104 | 0.3496 | 1.5104 | 1.2290 | | No log | 1.6345 | 322 | 1.4260 | 0.3496 | 1.4260 | 1.1942 | | No log | 1.6447 | 324 | 1.2716 | 0.3798 | 1.2716 | 1.1277 | | No log | 1.6548 | 326 | 1.1035 | 0.3834 | 1.1035 | 1.0505 | | No log | 1.6650 | 328 | 1.0430 | 0.4265 | 1.0430 | 1.0213 | | No log | 1.6751 | 330 | 1.0506 | 0.3917 | 1.0506 | 1.0250 | | No log | 1.6853 | 332 | 1.1166 | 0.4035 | 1.1166 | 1.0567 | | No log | 1.6954 | 334 | 1.1533 | 0.4128 | 1.1533 | 1.0739 | | No log | 1.7056 | 336 | 1.1665 | 0.4042 | 1.1665 | 1.0801 | | No log | 1.7157 | 338 | 1.2274 | 0.3751 | 1.2274 | 1.1079 | | No log | 1.7259 | 340 | 1.3234 | 0.3976 | 1.3234 | 1.1504 | | No log | 1.7360 | 342 | 1.4047 | 0.3752 | 1.4047 | 1.1852 | | No log | 1.7462 | 344 | 1.3366 | 0.3635 | 1.3366 | 1.1561 | | No log | 1.7563 | 346 | 1.2862 | 0.3420 | 1.2862 | 1.1341 | | No log | 1.7665 | 348 | 1.2130 | 0.3650 | 1.2130 | 1.1014 | | No log | 1.7766 | 350 | 1.1833 | 0.4522 | 1.1833 | 1.0878 | | No log | 1.7868 | 352 | 1.2088 | 0.4128 | 1.2088 | 1.0995 | | No log | 1.7970 | 354 | 1.1770 | 0.4128 | 1.1770 | 1.0849 | | No log | 1.8071 | 356 | 1.1668 | 0.4107 | 1.1668 | 1.0802 | | No log | 1.8173 | 358 | 1.1440 | 0.4085 | 1.1440 | 1.0696 | | No log | 1.8274 | 360 | 1.2064 | 0.3346 | 1.2064 | 1.0984 | | No log | 1.8376 | 362 | 1.2050 | 0.3382 | 1.2050 | 1.0977 | | No log | 1.8477 | 364 | 1.1613 | 0.4561 | 1.1613 | 1.0777 | | No log | 1.8579 | 366 | 1.0997 | 0.4900 | 1.0997 | 1.0487 | | No log | 1.8680 | 368 | 1.1180 | 0.4900 | 1.1180 | 1.0574 | | No log | 1.8782 | 370 | 1.3173 | 0.3820 | 1.3173 | 1.1478 | | No log | 1.8883 | 372 | 1.5174 | 0.3175 | 1.5174 | 1.2318 | | No log | 1.8985 | 374 | 1.4874 | 0.3295 | 1.4874 | 1.2196 | | No log | 1.9086 | 376 | 1.3152 | 0.3746 | 1.3152 | 1.1468 | | No log | 1.9188 | 378 | 1.1137 | 0.3865 | 1.1137 | 1.0553 | | No log | 1.9289 | 380 | 1.0631 | 0.3865 | 1.0631 | 1.0311 | | No log | 1.9391 | 382 | 1.0271 | 0.4295 | 1.0271 | 1.0135 | | No log | 1.9492 | 384 | 1.0622 | 0.4412 | 1.0622 | 1.0306 | | No log | 1.9594 | 386 | 1.1860 | 0.3298 | 1.1860 | 1.0891 | | No log | 1.9695 | 388 | 1.2765 | 0.3586 | 1.2765 | 1.1298 | | No log | 1.9797 | 390 | 1.3655 | 0.3723 | 1.3655 | 1.1685 | | No log | 1.9898 | 392 | 1.3462 | 0.3974 | 1.3462 | 1.1603 | | No log | 2.0 | 394 | 1.3277 | 0.3843 | 1.3277 | 1.1523 | | No log | 2.0102 | 396 | 1.2532 | 0.4120 | 1.2532 | 1.1195 | | No log | 2.0203 | 398 | 1.1848 | 0.4323 | 1.1848 | 1.0885 | | No log | 2.0305 | 400 | 1.0825 | 0.4924 | 1.0825 | 1.0404 | | No log | 2.0406 | 402 | 1.0649 | 0.4924 | 1.0649 | 1.0319 | | No log | 2.0508 | 404 | 1.1044 | 0.4686 | 1.1044 | 1.0509 | | No log | 2.0609 | 406 | 1.1672 | 0.4468 | 1.1672 | 1.0804 | | No log | 2.0711 | 408 | 1.2572 | 0.4370 | 1.2572 | 1.1212 | | No log | 2.0812 | 410 | 1.2205 | 0.4290 | 1.2205 | 1.1048 | | No log | 2.0914 | 412 | 1.1423 | 0.4940 | 1.1423 | 1.0688 | | No log | 2.1015 | 414 | 1.1749 | 0.4411 | 1.1749 | 1.0839 | | No log | 2.1117 | 416 | 1.3271 | 0.3830 | 1.3271 | 1.1520 | | No log | 2.1218 | 418 | 1.3558 | 0.3955 | 1.3558 | 1.1644 | | No log | 
2.1320 | 420 | 1.3175 | 0.4005 | 1.3175 | 1.1478 | | No log | 2.1421 | 422 | 1.2097 | 0.4490 | 1.2097 | 1.0999 | | No log | 2.1523 | 424 | 1.1988 | 0.4468 | 1.1988 | 1.0949 | | No log | 2.1624 | 426 | 1.2106 | 0.4197 | 1.2106 | 1.1003 | | No log | 2.1726 | 428 | 1.2118 | 0.4197 | 1.2118 | 1.1008 | | No log | 2.1827 | 430 | 1.1436 | 0.4275 | 1.1436 | 1.0694 | | No log | 2.1929 | 432 | 1.1132 | 0.4431 | 1.1132 | 1.0551 | | No log | 2.2030 | 434 | 1.1546 | 0.4519 | 1.1546 | 1.0745 | | No log | 2.2132 | 436 | 1.2196 | 0.4072 | 1.2196 | 1.1043 | | No log | 2.2234 | 438 | 1.2846 | 0.4017 | 1.2846 | 1.1334 | | No log | 2.2335 | 440 | 1.2943 | 0.4007 | 1.2943 | 1.1377 | | No log | 2.2437 | 442 | 1.2188 | 0.4207 | 1.2188 | 1.1040 | | No log | 2.2538 | 444 | 1.0840 | 0.4479 | 1.0840 | 1.0411 | | No log | 2.2640 | 446 | 1.0353 | 0.4275 | 1.0353 | 1.0175 | | No log | 2.2741 | 448 | 1.0051 | 0.4359 | 1.0051 | 1.0026 | | No log | 2.2843 | 450 | 1.0350 | 0.4737 | 1.0350 | 1.0174 | | No log | 2.2944 | 452 | 1.1542 | 0.4674 | 1.1542 | 1.0743 | | No log | 2.3046 | 454 | 1.2963 | 0.4258 | 1.2963 | 1.1385 | | No log | 2.3147 | 456 | 1.2753 | 0.4120 | 1.2753 | 1.1293 | | No log | 2.3249 | 458 | 1.2892 | 0.4120 | 1.2892 | 1.1354 | | No log | 2.3350 | 460 | 1.2092 | 0.4028 | 1.2092 | 1.0996 | | No log | 2.3452 | 462 | 1.0515 | 0.4818 | 1.0515 | 1.0254 | | No log | 2.3553 | 464 | 1.0140 | 0.4818 | 1.0140 | 1.0070 | | No log | 2.3655 | 466 | 1.1003 | 0.4676 | 1.1003 | 1.0490 | | No log | 2.3756 | 468 | 1.1919 | 0.4531 | 1.1919 | 1.0917 | | No log | 2.3858 | 470 | 1.2402 | 0.4387 | 1.2402 | 1.1136 | | No log | 2.3959 | 472 | 1.1946 | 0.3826 | 1.1946 | 1.0930 | | No log | 2.4061 | 474 | 1.1006 | 0.4376 | 1.1006 | 1.0491 | | No log | 2.4162 | 476 | 1.0937 | 0.4359 | 1.0937 | 1.0458 | | No log | 2.4264 | 478 | 1.0785 | 0.4454 | 1.0785 | 1.0385 | | No log | 2.4365 | 480 | 1.0827 | 0.4161 | 1.0827 | 1.0405 | | No log | 2.4467 | 482 | 1.0407 | 0.4027 | 1.0407 | 1.0201 | | No log | 2.4569 | 484 | 1.0127 | 0.4027 | 1.0127 | 1.0063 | | No log | 2.4670 | 486 | 1.0044 | 0.4336 | 1.0044 | 1.0022 | | No log | 2.4772 | 488 | 1.0388 | 0.4918 | 1.0388 | 1.0192 | | No log | 2.4873 | 490 | 1.1093 | 0.4165 | 1.1093 | 1.0532 | | No log | 2.4975 | 492 | 1.1501 | 0.4746 | 1.1501 | 1.0724 | | No log | 2.5076 | 494 | 1.0241 | 0.4801 | 1.0241 | 1.0120 | | No log | 2.5178 | 496 | 0.8577 | 0.5896 | 0.8577 | 0.9261 | | No log | 2.5279 | 498 | 0.7613 | 0.6083 | 0.7613 | 0.8726 | | 0.5284 | 2.5381 | 500 | 0.7745 | 0.5926 | 0.7745 | 0.8800 | | 0.5284 | 2.5482 | 502 | 0.9147 | 0.5405 | 0.9147 | 0.9564 | | 0.5284 | 2.5584 | 504 | 1.2355 | 0.4111 | 1.2355 | 1.1115 | | 0.5284 | 2.5685 | 506 | 1.5088 | 0.3957 | 1.5088 | 1.2283 | | 0.5284 | 2.5787 | 508 | 1.5334 | 0.3993 | 1.5334 | 1.2383 | | 0.5284 | 2.5888 | 510 | 1.3200 | 0.4391 | 1.3200 | 1.1489 | | 0.5284 | 2.5990 | 512 | 1.0796 | 0.4802 | 1.0796 | 1.0390 | | 0.5284 | 2.6091 | 514 | 0.8952 | 0.5082 | 0.8952 | 0.9461 | | 0.5284 | 2.6193 | 516 | 0.8482 | 0.4982 | 0.8482 | 0.9210 | | 0.5284 | 2.6294 | 518 | 0.8849 | 0.4826 | 0.8849 | 0.9407 | | 0.5284 | 2.6396 | 520 | 1.0210 | 0.5177 | 1.0210 | 1.0104 | | 0.5284 | 2.6497 | 522 | 1.1908 | 0.4812 | 1.1908 | 1.0912 | | 0.5284 | 2.6599 | 524 | 1.3284 | 0.4347 | 1.3284 | 1.1526 | | 0.5284 | 2.6701 | 526 | 1.4459 | 0.4087 | 1.4459 | 1.2025 | | 0.5284 | 2.6802 | 528 | 1.4216 | 0.4087 | 1.4216 | 1.1923 | | 0.5284 | 2.6904 | 530 | 1.3368 | 0.4479 | 1.3368 | 1.1562 | | 0.5284 | 2.7005 | 532 | 1.2565 | 0.4305 | 1.2565 | 1.1209 | | 0.5284 | 2.7107 | 534 | 
1.1566 | 0.4378 | 1.1566 | 1.0755 | | 0.5284 | 2.7208 | 536 | 1.0820 | 0.4159 | 1.0820 | 1.0402 | | 0.5284 | 2.7310 | 538 | 1.0833 | 0.4250 | 1.0833 | 1.0408 | | 0.5284 | 2.7411 | 540 | 1.1283 | 0.4250 | 1.1283 | 1.0622 | | 0.5284 | 2.7513 | 542 | 1.1550 | 0.4008 | 1.1550 | 1.0747 | | 0.5284 | 2.7614 | 544 | 1.2436 | 0.3757 | 1.2436 | 1.1152 | | 0.5284 | 2.7716 | 546 | 1.3052 | 0.3615 | 1.3052 | 1.1425 | | 0.5284 | 2.7817 | 548 | 1.3355 | 0.3615 | 1.3355 | 1.1556 | | 0.5284 | 2.7919 | 550 | 1.4014 | 0.3880 | 1.4014 | 1.1838 | | 0.5284 | 2.8020 | 552 | 1.3691 | 0.4037 | 1.3691 | 1.1701 | | 0.5284 | 2.8122 | 554 | 1.2655 | 0.4184 | 1.2655 | 1.1249 | | 0.5284 | 2.8223 | 556 | 1.2621 | 0.4202 | 1.2621 | 1.1234 | | 0.5284 | 2.8325 | 558 | 1.2825 | 0.3955 | 1.2825 | 1.1325 | | 0.5284 | 2.8426 | 560 | 1.3098 | 0.3955 | 1.3098 | 1.1444 | | 0.5284 | 2.8528 | 562 | 1.3047 | 0.3955 | 1.3047 | 1.1422 | | 0.5284 | 2.8629 | 564 | 1.2749 | 0.3762 | 1.2749 | 1.1291 | | 0.5284 | 2.8731 | 566 | 1.2551 | 0.3964 | 1.2551 | 1.1203 | | 0.5284 | 2.8832 | 568 | 1.1753 | 0.3917 | 1.1753 | 1.0841 | | 0.5284 | 2.8934 | 570 | 1.0850 | 0.4139 | 1.0850 | 1.0416 | | 0.5284 | 2.9036 | 572 | 1.0432 | 0.3940 | 1.0432 | 1.0214 | | 0.5284 | 2.9137 | 574 | 1.0815 | 0.3856 | 1.0815 | 1.0400 | | 0.5284 | 2.9239 | 576 | 1.1116 | 0.4078 | 1.1116 | 1.0543 | | 0.5284 | 2.9340 | 578 | 1.0966 | 0.4412 | 1.0966 | 1.0472 | | 0.5284 | 2.9442 | 580 | 1.0872 | 0.4844 | 1.0872 | 1.0427 | | 0.5284 | 2.9543 | 582 | 1.1644 | 0.4307 | 1.1644 | 1.0791 | | 0.5284 | 2.9645 | 584 | 1.3345 | 0.4103 | 1.3345 | 1.1552 | | 0.5284 | 2.9746 | 586 | 1.5136 | 0.3717 | 1.5136 | 1.2303 | | 0.5284 | 2.9848 | 588 | 1.5983 | 0.3966 | 1.5983 | 1.2642 | | 0.5284 | 2.9949 | 590 | 1.5695 | 0.4087 | 1.5695 | 1.2528 | | 0.5284 | 3.0051 | 592 | 1.4819 | 0.3964 | 1.4819 | 1.2174 | | 0.5284 | 3.0152 | 594 | 1.3052 | 0.4051 | 1.3052 | 1.1425 | | 0.5284 | 3.0254 | 596 | 1.1288 | 0.4008 | 1.1288 | 1.0624 | | 0.5284 | 3.0355 | 598 | 1.0386 | 0.4172 | 1.0386 | 1.0191 | | 0.5284 | 3.0457 | 600 | 1.0354 | 0.4172 | 1.0354 | 1.0175 | | 0.5284 | 3.0558 | 602 | 1.1119 | 0.4242 | 1.1119 | 1.0544 | | 0.5284 | 3.0660 | 604 | 1.2471 | 0.4008 | 1.2471 | 1.1167 | | 0.5284 | 3.0761 | 606 | 1.4223 | 0.3697 | 1.4223 | 1.1926 | | 0.5284 | 3.0863 | 608 | 1.5295 | 0.3795 | 1.5295 | 1.2367 | | 0.5284 | 3.0964 | 610 | 1.5405 | 0.3670 | 1.5405 | 1.2412 | | 0.5284 | 3.1066 | 612 | 1.5164 | 0.3670 | 1.5164 | 1.2314 | | 0.5284 | 3.1168 | 614 | 1.4590 | 0.3839 | 1.4590 | 1.2079 | | 0.5284 | 3.1269 | 616 | 1.4316 | 0.3839 | 1.4316 | 1.1965 | | 0.5284 | 3.1371 | 618 | 1.3527 | 0.3907 | 1.3527 | 1.1631 | | 0.5284 | 3.1472 | 620 | 1.2733 | 0.4076 | 1.2733 | 1.1284 | | 0.5284 | 3.1574 | 622 | 1.2046 | 0.4637 | 1.2046 | 1.0975 | | 0.5284 | 3.1675 | 624 | 1.2005 | 0.4188 | 1.2005 | 1.0957 | | 0.5284 | 3.1777 | 626 | 1.2260 | 0.3783 | 1.2260 | 1.1073 | | 0.5284 | 3.1878 | 628 | 1.2783 | 0.3783 | 1.2783 | 1.1306 | | 0.5284 | 3.1980 | 630 | 1.3588 | 0.3924 | 1.3588 | 1.1657 | | 0.5284 | 3.2081 | 632 | 1.4857 | 0.3860 | 1.4857 | 1.2189 | | 0.5284 | 3.2183 | 634 | 1.5522 | 0.3740 | 1.5522 | 1.2459 | | 0.5284 | 3.2284 | 636 | 1.6590 | 0.3623 | 1.6590 | 1.2880 | | 0.5284 | 3.2386 | 638 | 1.6372 | 0.3623 | 1.6372 | 1.2795 | | 0.5284 | 3.2487 | 640 | 1.5306 | 0.3740 | 1.5306 | 1.2372 | | 0.5284 | 3.2589 | 642 | 1.3981 | 0.3907 | 1.3981 | 1.1824 | | 0.5284 | 3.2690 | 644 | 1.3676 | 0.3995 | 1.3676 | 1.1695 | | 0.5284 | 3.2792 | 646 | 1.3771 | 0.3857 | 1.3771 | 1.1735 | | 0.5284 | 3.2893 | 648 | 1.3725 | 0.3671 | 
1.3725 | 1.1715 | | 0.5284 | 3.2995 | 650 | 1.3424 | 0.3671 | 1.3424 | 1.1586 | | 0.5284 | 3.3096 | 652 | 1.2752 | 0.3524 | 1.2752 | 1.1293 | | 0.5284 | 3.3198 | 654 | 1.2191 | 0.3757 | 1.2191 | 1.1041 | | 0.5284 | 3.3299 | 656 | 1.2175 | 0.3757 | 1.2175 | 1.1034 | | 0.5284 | 3.3401 | 658 | 1.2266 | 0.3524 | 1.2266 | 1.1075 | | 0.5284 | 3.3503 | 660 | 1.3070 | 0.3757 | 1.3070 | 1.1432 | | 0.5284 | 3.3604 | 662 | 1.4453 | 0.3902 | 1.4453 | 1.2022 | | 0.5284 | 3.3706 | 664 | 1.5205 | 0.3694 | 1.5205 | 1.2331 | | 0.5284 | 3.3807 | 666 | 1.5372 | 0.3575 | 1.5372 | 1.2398 | | 0.5284 | 3.3909 | 668 | 1.4418 | 0.3575 | 1.4418 | 1.2007 | | 0.5284 | 3.4010 | 670 | 1.3302 | 0.3739 | 1.3302 | 1.1533 | | 0.5284 | 3.4112 | 672 | 1.2486 | 0.4305 | 1.2486 | 1.1174 | | 0.5284 | 3.4213 | 674 | 1.2213 | 0.4499 | 1.2213 | 1.1051 | | 0.5284 | 3.4315 | 676 | 1.2322 | 0.4277 | 1.2322 | 1.1100 | | 0.5284 | 3.4416 | 678 | 1.2744 | 0.3809 | 1.2744 | 1.1289 | | 0.5284 | 3.4518 | 680 | 1.2472 | 0.3757 | 1.2472 | 1.1168 | | 0.5284 | 3.4619 | 682 | 1.2509 | 0.3757 | 1.2509 | 1.1184 | | 0.5284 | 3.4721 | 684 | 1.2139 | 0.4008 | 1.2139 | 1.1018 | | 0.5284 | 3.4822 | 686 | 1.2269 | 0.4030 | 1.2269 | 1.1077 | | 0.5284 | 3.4924 | 688 | 1.2773 | 0.4356 | 1.2773 | 1.1302 | | 0.5284 | 3.5025 | 690 | 1.3377 | 0.4120 | 1.3377 | 1.1566 | | 0.5284 | 3.5127 | 692 | 1.3525 | 0.3975 | 1.3525 | 1.1630 | | 0.5284 | 3.5228 | 694 | 1.3712 | 0.3918 | 1.3712 | 1.1710 | | 0.5284 | 3.5330 | 696 | 1.4345 | 0.3797 | 1.4345 | 1.1977 | | 0.5284 | 3.5431 | 698 | 1.5644 | 0.3485 | 1.5644 | 1.2508 | | 0.5284 | 3.5533 | 700 | 1.7482 | 0.3244 | 1.7482 | 1.3222 | | 0.5284 | 3.5635 | 702 | 1.8171 | 0.3104 | 1.8171 | 1.3480 | | 0.5284 | 3.5736 | 704 | 1.8869 | 0.2892 | 1.8869 | 1.3736 | | 0.5284 | 3.5838 | 706 | 1.8149 | 0.3199 | 1.8149 | 1.3472 | | 0.5284 | 3.5939 | 708 | 1.6114 | 0.3510 | 1.6114 | 1.2694 | | 0.5284 | 3.6041 | 710 | 1.4590 | 0.3753 | 1.4590 | 1.2079 | | 0.5284 | 3.6142 | 712 | 1.4722 | 0.3753 | 1.4722 | 1.2134 | | 0.5284 | 3.6244 | 714 | 1.5388 | 0.3753 | 1.5388 | 1.2405 | | 0.5284 | 3.6345 | 716 | 1.6785 | 0.3373 | 1.6785 | 1.2956 | | 0.5284 | 3.6447 | 718 | 1.7689 | 0.3346 | 1.7689 | 1.3300 | | 0.5284 | 3.6548 | 720 | 1.8020 | 0.3346 | 1.8020 | 1.3424 | | 0.5284 | 3.6650 | 722 | 1.7927 | 0.3346 | 1.7927 | 1.3389 | | 0.5284 | 3.6751 | 724 | 1.6317 | 0.3549 | 1.6317 | 1.2774 | | 0.5284 | 3.6853 | 726 | 1.4984 | 0.3872 | 1.4984 | 1.2241 | | 0.5284 | 3.6954 | 728 | 1.4357 | 0.3994 | 1.4357 | 1.1982 | | 0.5284 | 3.7056 | 730 | 1.3327 | 0.4257 | 1.3327 | 1.1544 | | 0.5284 | 3.7157 | 732 | 1.2696 | 0.4257 | 1.2696 | 1.1268 | | 0.5284 | 3.7259 | 734 | 1.2579 | 0.4525 | 1.2579 | 1.1216 | | 0.5284 | 3.7360 | 736 | 1.3143 | 0.4257 | 1.3143 | 1.1464 | | 0.5284 | 3.7462 | 738 | 1.3930 | 0.4120 | 1.3930 | 1.1803 | | 0.5284 | 3.7563 | 740 | 1.5528 | 0.3459 | 1.5528 | 1.2461 | | 0.5284 | 3.7665 | 742 | 1.6403 | 0.3187 | 1.6403 | 1.2807 | | 0.5284 | 3.7766 | 744 | 1.6723 | 0.3187 | 1.6723 | 1.2932 | | 0.5284 | 3.7868 | 746 | 1.7027 | 0.3187 | 1.7027 | 1.3049 | | 0.5284 | 3.7970 | 748 | 1.7128 | 0.3187 | 1.7128 | 1.3087 | | 0.5284 | 3.8071 | 750 | 1.6065 | 0.3393 | 1.6065 | 1.2675 | | 0.5284 | 3.8173 | 752 | 1.4926 | 0.3665 | 1.4926 | 1.2217 | | 0.5284 | 3.8274 | 754 | 1.5220 | 0.3665 | 1.5220 | 1.2337 | | 0.5284 | 3.8376 | 756 | 1.5909 | 0.3393 | 1.5909 | 1.2613 | | 0.5284 | 3.8477 | 758 | 1.5705 | 0.3393 | 1.5705 | 1.2532 | | 0.5284 | 3.8579 | 760 | 1.4894 | 0.3717 | 1.4894 | 1.2204 | | 0.5284 | 3.8680 | 762 | 1.4744 | 0.3924 | 1.4744 | 1.2143 | | 
0.5284 | 3.8782 | 764 | 1.5339 | 0.3454 | 1.5339 | 1.2385 | | 0.5284 | 3.8883 | 766 | 1.6376 | 0.3187 | 1.6376 | 1.2797 | | 0.5284 | 3.8985 | 768 | 1.6725 | 0.3079 | 1.6725 | 1.2932 | | 0.5284 | 3.9086 | 770 | 1.7426 | 0.3110 | 1.7426 | 1.3201 | | 0.5284 | 3.9188 | 772 | 1.7953 | 0.2943 | 1.7953 | 1.3399 | | 0.5284 | 3.9289 | 774 | 1.7128 | 0.3110 | 1.7128 | 1.3087 | | 0.5284 | 3.9391 | 776 | 1.5801 | 0.3508 | 1.5801 | 1.2570 | | 0.5284 | 3.9492 | 778 | 1.5373 | 0.3599 | 1.5373 | 1.2399 | | 0.5284 | 3.9594 | 780 | 1.5049 | 0.3599 | 1.5049 | 1.2268 | | 0.5284 | 3.9695 | 782 | 1.4570 | 0.3872 | 1.4570 | 1.2070 | | 0.5284 | 3.9797 | 784 | 1.5558 | 0.3599 | 1.5558 | 1.2473 | | 0.5284 | 3.9898 | 786 | 1.7754 | 0.3217 | 1.7754 | 1.3324 | | 0.5284 | 4.0 | 788 | 1.9745 | 0.3048 | 1.9745 | 1.4052 | | 0.5284 | 4.0102 | 790 | 2.1551 | 0.2181 | 2.1551 | 1.4680 | | 0.5284 | 4.0203 | 792 | 2.1627 | 0.2181 | 2.1627 | 1.4706 | | 0.5284 | 4.0305 | 794 | 2.0033 | 0.275 | 2.0033 | 1.4154 | | 0.5284 | 4.0406 | 796 | 1.7694 | 0.3217 | 1.7694 | 1.3302 | | 0.5284 | 4.0508 | 798 | 1.4905 | 0.3665 | 1.4905 | 1.2209 | | 0.5284 | 4.0609 | 800 | 1.3142 | 0.4034 | 1.3142 | 1.1464 | | 0.5284 | 4.0711 | 802 | 1.2613 | 0.4384 | 1.2613 | 1.1231 | | 0.5284 | 4.0812 | 804 | 1.3167 | 0.4034 | 1.3167 | 1.1475 | | 0.5284 | 4.0914 | 806 | 1.4765 | 0.3599 | 1.4765 | 1.2151 | | 0.5284 | 4.1015 | 808 | 1.5606 | 0.3599 | 1.5606 | 1.2493 | | 0.5284 | 4.1117 | 810 | 1.5849 | 0.3599 | 1.5849 | 1.2589 | | 0.5284 | 4.1218 | 812 | 1.5482 | 0.3599 | 1.5482 | 1.2443 | | 0.5284 | 4.1320 | 814 | 1.5372 | 0.3599 | 1.5372 | 1.2398 | | 0.5284 | 4.1421 | 816 | 1.5115 | 0.3599 | 1.5115 | 1.2294 | | 0.5284 | 4.1523 | 818 | 1.5015 | 0.3839 | 1.5015 | 1.2254 | | 0.5284 | 4.1624 | 820 | 1.5348 | 0.3839 | 1.5348 | 1.2389 | | 0.5284 | 4.1726 | 822 | 1.6221 | 0.3485 | 1.6221 | 1.2736 | | 0.5284 | 4.1827 | 824 | 1.7254 | 0.3203 | 1.7254 | 1.3136 | | 0.5284 | 4.1929 | 826 | 1.7164 | 0.3203 | 1.7164 | 1.3101 | | 0.5284 | 4.2030 | 828 | 1.6230 | 0.3508 | 1.6230 | 1.2740 | | 0.5284 | 4.2132 | 830 | 1.4822 | 0.3817 | 1.4822 | 1.2175 | | 0.5284 | 4.2234 | 832 | 1.3981 | 0.3928 | 1.3981 | 1.1824 | | 0.5284 | 4.2335 | 834 | 1.3523 | 0.3730 | 1.3523 | 1.1629 | | 0.5284 | 4.2437 | 836 | 1.3582 | 0.3730 | 1.3582 | 1.1654 | | 0.5284 | 4.2538 | 838 | 1.3862 | 0.3928 | 1.3862 | 1.1774 | | 0.5284 | 4.2640 | 840 | 1.4899 | 0.4075 | 1.4899 | 1.2206 | | 0.5284 | 4.2741 | 842 | 1.5644 | 0.3508 | 1.5644 | 1.2507 | | 0.5284 | 4.2843 | 844 | 1.6005 | 0.3626 | 1.6005 | 1.2651 | | 0.5284 | 4.2944 | 846 | 1.5799 | 0.3414 | 1.5799 | 1.2569 | | 0.5284 | 4.3046 | 848 | 1.5621 | 0.3414 | 1.5621 | 1.2499 | | 0.5284 | 4.3147 | 850 | 1.5348 | 0.3839 | 1.5348 | 1.2389 | | 0.5284 | 4.3249 | 852 | 1.5055 | 0.3839 | 1.5055 | 1.2270 | | 0.5284 | 4.3350 | 854 | 1.4350 | 0.4227 | 1.4350 | 1.1979 | | 0.5284 | 4.3452 | 856 | 1.4387 | 0.4227 | 1.4387 | 1.1995 | | 0.5284 | 4.3553 | 858 | 1.4805 | 0.3964 | 1.4805 | 1.2167 | | 0.5284 | 4.3655 | 860 | 1.4699 | 0.3964 | 1.4699 | 1.2124 | | 0.5284 | 4.3756 | 862 | 1.4224 | 0.4094 | 1.4224 | 1.1926 | | 0.5284 | 4.3858 | 864 | 1.4410 | 0.3964 | 1.4410 | 1.2004 | | 0.5284 | 4.3959 | 866 | 1.5077 | 0.3839 | 1.5077 | 1.2279 | | 0.5284 | 4.4061 | 868 | 1.5477 | 0.3599 | 1.5477 | 1.2441 | | 0.5284 | 4.4162 | 870 | 1.5222 | 0.3599 | 1.5222 | 1.2338 | | 0.5284 | 4.4264 | 872 | 1.4573 | 0.3717 | 1.4573 | 1.2072 | | 0.5284 | 4.4365 | 874 | 1.3934 | 0.4227 | 1.3934 | 1.1804 | | 0.5284 | 4.4467 | 876 | 1.3466 | 0.4438 | 1.3466 | 1.1604 | | 0.5284 | 4.4569 | 878 | 
1.3260 | 0.4438 | 1.3260 | 1.1515 | | 0.5284 | 4.4670 | 880 | 1.2943 | 0.4425 | 1.2943 | 1.1377 | | 0.5284 | 4.4772 | 882 | 1.2609 | 0.3837 | 1.2609 | 1.1229 | | 0.5284 | 4.4873 | 884 | 1.2628 | 0.3837 | 1.2628 | 1.1237 | | 0.5284 | 4.4975 | 886 | 1.2358 | 0.3837 | 1.2358 | 1.1117 | | 0.5284 | 4.5076 | 888 | 1.2485 | 0.4234 | 1.2485 | 1.1174 | | 0.5284 | 4.5178 | 890 | 1.2785 | 0.4425 | 1.2785 | 1.1307 | | 0.5284 | 4.5279 | 892 | 1.3263 | 0.4299 | 1.3263 | 1.1517 | | 0.5284 | 4.5381 | 894 | 1.4629 | 0.3784 | 1.4629 | 1.2095 | | 0.5284 | 4.5482 | 896 | 1.6041 | 0.3574 | 1.6041 | 1.2665 | | 0.5284 | 4.5584 | 898 | 1.7082 | 0.3581 | 1.7082 | 1.3070 | | 0.5284 | 4.5685 | 900 | 1.7740 | 0.3478 | 1.7740 | 1.3319 | | 0.5284 | 4.5787 | 902 | 1.7166 | 0.3581 | 1.7166 | 1.3102 | | 0.5284 | 4.5888 | 904 | 1.5625 | 0.3930 | 1.5625 | 1.2500 | | 0.5284 | 4.5990 | 906 | 1.4819 | 0.3907 | 1.4819 | 1.2173 | | 0.5284 | 4.6091 | 908 | 1.3921 | 0.4034 | 1.3921 | 1.1799 | | 0.5284 | 4.6193 | 910 | 1.4049 | 0.4034 | 1.4049 | 1.1853 | | 0.5284 | 4.6294 | 912 | 1.4146 | 0.4034 | 1.4146 | 1.1894 | | 0.5284 | 4.6396 | 914 | 1.4112 | 0.4075 | 1.4112 | 1.1879 | | 0.5284 | 4.6497 | 916 | 1.4195 | 0.4037 | 1.4195 | 1.1914 | | 0.5284 | 4.6599 | 918 | 1.3988 | 0.3783 | 1.3988 | 1.1827 | | 0.5284 | 4.6701 | 920 | 1.4171 | 0.3537 | 1.4171 | 1.1904 | | 0.5284 | 4.6802 | 922 | 1.4338 | 0.3732 | 1.4338 | 1.1974 | | 0.5284 | 4.6904 | 924 | 1.4367 | 0.3732 | 1.4367 | 1.1986 | | 0.5284 | 4.7005 | 926 | 1.4087 | 0.3732 | 1.4087 | 1.1869 | | 0.5284 | 4.7107 | 928 | 1.3372 | 0.3783 | 1.3372 | 1.1564 | | 0.5284 | 4.7208 | 930 | 1.2506 | 0.3757 | 1.2506 | 1.1183 | | 0.5284 | 4.7310 | 932 | 1.2019 | 0.3757 | 1.2019 | 1.0963 | | 0.5284 | 4.7411 | 934 | 1.1984 | 0.3757 | 1.1984 | 1.0947 | | 0.5284 | 4.7513 | 936 | 1.2544 | 0.3783 | 1.2544 | 1.1200 | | 0.5284 | 4.7614 | 938 | 1.3796 | 0.4037 | 1.3796 | 1.1746 | | 0.5284 | 4.7716 | 940 | 1.4504 | 0.3902 | 1.4504 | 1.2043 | | 0.5284 | 4.7817 | 942 | 1.4701 | 0.3964 | 1.4701 | 1.2125 | | 0.5284 | 4.7919 | 944 | 1.4569 | 0.3964 | 1.4569 | 1.2070 | | 0.5284 | 4.8020 | 946 | 1.4255 | 0.3964 | 1.4255 | 1.1939 | | 0.5284 | 4.8122 | 948 | 1.4290 | 0.3964 | 1.4290 | 1.1954 | | 0.5284 | 4.8223 | 950 | 1.4582 | 0.3964 | 1.4582 | 1.2075 | | 0.5284 | 4.8325 | 952 | 1.4857 | 0.3964 | 1.4857 | 1.2189 | | 0.5284 | 4.8426 | 954 | 1.5658 | 0.3964 | 1.5658 | 1.2513 | | 0.5284 | 4.8528 | 956 | 1.6055 | 0.3771 | 1.6055 | 1.2671 | | 0.5284 | 4.8629 | 958 | 1.6249 | 0.3420 | 1.6249 | 1.2747 | | 0.5284 | 4.8731 | 960 | 1.5502 | 0.3626 | 1.5502 | 1.2451 | | 0.5284 | 4.8832 | 962 | 1.4480 | 0.3964 | 1.4480 | 1.2033 | | 0.5284 | 4.8934 | 964 | 1.4218 | 0.3964 | 1.4218 | 1.1924 | | 0.5284 | 4.9036 | 966 | 1.4205 | 0.3964 | 1.4205 | 1.1918 | | 0.5284 | 4.9137 | 968 | 1.4326 | 0.3964 | 1.4326 | 1.1969 | | 0.5284 | 4.9239 | 970 | 1.4456 | 0.3839 | 1.4456 | 1.2023 | | 0.5284 | 4.9340 | 972 | 1.3779 | 0.3964 | 1.3779 | 1.1739 | | 0.5284 | 4.9442 | 974 | 1.3687 | 0.3839 | 1.3687 | 1.1699 | | 0.5284 | 4.9543 | 976 | 1.4781 | 0.3717 | 1.4781 | 1.2158 | | 0.5284 | 4.9645 | 978 | 1.5735 | 0.3623 | 1.5735 | 1.2544 | | 0.5284 | 4.9746 | 980 | 1.5661 | 0.3623 | 1.5661 | 1.2514 | | 0.5284 | 4.9848 | 982 | 1.5245 | 0.3623 | 1.5245 | 1.2347 | | 0.5284 | 4.9949 | 984 | 1.5196 | 0.3623 | 1.5196 | 1.2327 | | 0.5284 | 5.0051 | 986 | 1.5446 | 0.3623 | 1.5446 | 1.2428 | | 0.5284 | 5.0152 | 988 | 1.5056 | 0.3599 | 1.5056 | 1.2270 | | 0.5284 | 5.0254 | 990 | 1.4518 | 0.3665 | 1.4518 | 1.2049 | | 0.5284 | 5.0355 | 992 | 1.3828 | 0.3851 | 
1.3828 | 1.1759 | | 0.5284 | 5.0457 | 994 | 1.3583 | 0.3851 | 1.3583 | 1.1655 | | 0.5284 | 5.0558 | 996 | 1.4214 | 0.3784 | 1.4214 | 1.1922 | | 0.5284 | 5.0660 | 998 | 1.5032 | 0.3599 | 1.5032 | 1.2260 | | 0.0907 | 5.0761 | 1000 | 1.6010 | 0.3420 | 1.6010 | 1.2653 | | 0.0907 | 5.0863 | 1002 | 1.6207 | 0.3420 | 1.6207 | 1.2731 | | 0.0907 | 5.0964 | 1004 | 1.5803 | 0.3623 | 1.5803 | 1.2571 | | 0.0907 | 5.1066 | 1006 | 1.4996 | 0.3717 | 1.4996 | 1.2246 | | 0.0907 | 5.1168 | 1008 | 1.4155 | 0.3964 | 1.4155 | 1.1897 | | 0.0907 | 5.1269 | 1010 | 1.3612 | 0.3964 | 1.3612 | 1.1667 | | 0.0907 | 5.1371 | 1012 | 1.3353 | 0.3964 | 1.3353 | 1.1555 | | 0.0907 | 5.1472 | 1014 | 1.3589 | 0.3964 | 1.3589 | 1.1657 | | 0.0907 | 5.1574 | 1016 | 1.3874 | 0.3944 | 1.3874 | 1.1779 | | 0.0907 | 5.1675 | 1018 | 1.3714 | 0.4056 | 1.3714 | 1.1711 | | 0.0907 | 5.1777 | 1020 | 1.3365 | 0.4194 | 1.3365 | 1.1561 | | 0.0907 | 5.1878 | 1022 | 1.3154 | 0.4194 | 1.3154 | 1.1469 | | 0.0907 | 5.1980 | 1024 | 1.3042 | 0.4194 | 1.3042 | 1.1420 | | 0.0907 | 5.2081 | 1026 | 1.3050 | 0.4194 | 1.3050 | 1.1424 | | 0.0907 | 5.2183 | 1028 | 1.3582 | 0.4075 | 1.3582 | 1.1654 | | 0.0907 | 5.2284 | 1030 | 1.4132 | 0.3944 | 1.4132 | 1.1888 | | 0.0907 | 5.2386 | 1032 | 1.4290 | 0.3817 | 1.4290 | 1.1954 | | 0.0907 | 5.2487 | 1034 | 1.4584 | 0.3817 | 1.4584 | 1.2076 | | 0.0907 | 5.2589 | 1036 | 1.4558 | 0.3817 | 1.4558 | 1.2066 | | 0.0907 | 5.2690 | 1038 | 1.4209 | 0.3817 | 1.4209 | 1.1920 | | 0.0907 | 5.2792 | 1040 | 1.4132 | 0.3817 | 1.4132 | 1.1888 | | 0.0907 | 5.2893 | 1042 | 1.4004 | 0.3944 | 1.4004 | 1.1834 | | 0.0907 | 5.2995 | 1044 | 1.3842 | 0.3944 | 1.3842 | 1.1765 | | 0.0907 | 5.3096 | 1046 | 1.3473 | 0.3944 | 1.3473 | 1.1607 | | 0.0907 | 5.3198 | 1048 | 1.3086 | 0.3944 | 1.3086 | 1.1440 | | 0.0907 | 5.3299 | 1050 | 1.3286 | 0.3944 | 1.3286 | 1.1526 | | 0.0907 | 5.3401 | 1052 | 1.4090 | 0.3964 | 1.4090 | 1.1870 | | 0.0907 | 5.3503 | 1054 | 1.4855 | 0.3771 | 1.4855 | 1.2188 | | 0.0907 | 5.3604 | 1056 | 1.5330 | 0.4021 | 1.5330 | 1.2381 | | 0.0907 | 5.3706 | 1058 | 1.5060 | 0.3771 | 1.5060 | 1.2272 | | 0.0907 | 5.3807 | 1060 | 1.4675 | 0.3748 | 1.4675 | 1.2114 | | 0.0907 | 5.3909 | 1062 | 1.4434 | 0.3964 | 1.4434 | 1.2014 | | 0.0907 | 5.4010 | 1064 | 1.4584 | 0.3964 | 1.4584 | 1.2076 | | 0.0907 | 5.4112 | 1066 | 1.4623 | 0.4213 | 1.4623 | 1.2092 | | 0.0907 | 5.4213 | 1068 | 1.4786 | 0.3964 | 1.4786 | 1.2160 | | 0.0907 | 5.4315 | 1070 | 1.4968 | 0.3860 | 1.4968 | 1.2234 | | 0.0907 | 5.4416 | 1072 | 1.5275 | 0.3740 | 1.5275 | 1.2359 | | 0.0907 | 5.4518 | 1074 | 1.5278 | 0.3740 | 1.5278 | 1.2360 | | 0.0907 | 5.4619 | 1076 | 1.4992 | 0.3984 | 1.4992 | 1.2244 | | 0.0907 | 5.4721 | 1078 | 1.4961 | 0.3984 | 1.4961 | 1.2232 | | 0.0907 | 5.4822 | 1080 | 1.4782 | 0.3964 | 1.4782 | 1.2158 | | 0.0907 | 5.4924 | 1082 | 1.5015 | 0.3984 | 1.5015 | 1.2254 | | 0.0907 | 5.5025 | 1084 | 1.5461 | 0.3984 | 1.5461 | 1.2434 | | 0.0907 | 5.5127 | 1086 | 1.5267 | 0.3984 | 1.5267 | 1.2356 | | 0.0907 | 5.5228 | 1088 | 1.5042 | 0.4048 | 1.5042 | 1.2265 | | 0.0907 | 5.5330 | 1090 | 1.4695 | 0.4096 | 1.4695 | 1.2122 | | 0.0907 | 5.5431 | 1092 | 1.4732 | 0.4096 | 1.4732 | 1.2137 | | 0.0907 | 5.5533 | 1094 | 1.3891 | 0.4061 | 1.3891 | 1.1786 | | 0.0907 | 5.5635 | 1096 | 1.3202 | 0.4320 | 1.3202 | 1.1490 | | 0.0907 | 5.5736 | 1098 | 1.3027 | 0.4455 | 1.3027 | 1.1414 | | 0.0907 | 5.5838 | 1100 | 1.2956 | 0.4455 | 1.2956 | 1.1382 | | 0.0907 | 5.5939 | 1102 | 1.3373 | 0.4320 | 1.3373 | 1.1564 | | 0.0907 | 5.6041 | 1104 | 1.3786 | 0.4188 | 1.3786 | 1.1741 | | 0.0907 | 
5.6142 | 1106 | 1.4516 | 0.4314 | 1.4516 | 1.2048 | | 0.0907 | 5.6244 | 1108 | 1.4734 | 0.4314 | 1.4734 | 1.2139 | | 0.0907 | 5.6345 | 1110 | 1.4620 | 0.4314 | 1.4620 | 1.2091 | | 0.0907 | 5.6447 | 1112 | 1.4709 | 0.4314 | 1.4709 | 1.2128 | | 0.0907 | 5.6548 | 1114 | 1.5099 | 0.4048 | 1.5099 | 1.2288 | | 0.0907 | 5.6650 | 1116 | 1.5447 | 0.3984 | 1.5447 | 1.2429 | | 0.0907 | 5.6751 | 1118 | 1.5612 | 0.3984 | 1.5612 | 1.2495 | | 0.0907 | 5.6853 | 1120 | 1.5348 | 0.4104 | 1.5348 | 1.2389 | | 0.0907 | 5.6954 | 1122 | 1.5056 | 0.3839 | 1.5056 | 1.2270 | | 0.0907 | 5.7056 | 1124 | 1.4837 | 0.3839 | 1.4837 | 1.2181 | | 0.0907 | 5.7157 | 1126 | 1.4548 | 0.3964 | 1.4548 | 1.2061 | | 0.0907 | 5.7259 | 1128 | 1.4275 | 0.3944 | 1.4275 | 1.1948 | | 0.0907 | 5.7360 | 1130 | 1.3993 | 0.3944 | 1.3993 | 1.1829 | | 0.0907 | 5.7462 | 1132 | 1.3821 | 0.3944 | 1.3821 | 1.1756 | | 0.0907 | 5.7563 | 1134 | 1.3825 | 0.3944 | 1.3825 | 1.1758 | | 0.0907 | 5.7665 | 1136 | 1.3684 | 0.3944 | 1.3684 | 1.1698 | | 0.0907 | 5.7766 | 1138 | 1.3718 | 0.3944 | 1.3718 | 1.1712 | | 0.0907 | 5.7868 | 1140 | 1.3653 | 0.3944 | 1.3653 | 1.1684 | | 0.0907 | 5.7970 | 1142 | 1.3908 | 0.3944 | 1.3908 | 1.1793 | | 0.0907 | 5.8071 | 1144 | 1.4315 | 0.3964 | 1.4315 | 1.1964 | | 0.0907 | 5.8173 | 1146 | 1.4996 | 0.3839 | 1.4996 | 1.2246 | | 0.0907 | 5.8274 | 1148 | 1.5586 | 0.3717 | 1.5586 | 1.2485 | | 0.0907 | 5.8376 | 1150 | 1.5646 | 0.3717 | 1.5646 | 1.2508 | | 0.0907 | 5.8477 | 1152 | 1.5795 | 0.3717 | 1.5795 | 1.2568 | | 0.0907 | 5.8579 | 1154 | 1.5895 | 0.3717 | 1.5895 | 1.2608 | | 0.0907 | 5.8680 | 1156 | 1.5610 | 0.3839 | 1.5610 | 1.2494 | | 0.0907 | 5.8782 | 1158 | 1.5605 | 0.3839 | 1.5605 | 1.2492 | | 0.0907 | 5.8883 | 1160 | 1.5514 | 0.3839 | 1.5514 | 1.2456 | | 0.0907 | 5.8985 | 1162 | 1.5744 | 0.3839 | 1.5744 | 1.2548 | | 0.0907 | 5.9086 | 1164 | 1.5864 | 0.3839 | 1.5864 | 1.2595 | | 0.0907 | 5.9188 | 1166 | 1.5723 | 0.3839 | 1.5723 | 1.2539 | | 0.0907 | 5.9289 | 1168 | 1.5377 | 0.3964 | 1.5377 | 1.2401 | | 0.0907 | 5.9391 | 1170 | 1.4999 | 0.3964 | 1.4999 | 1.2247 | | 0.0907 | 5.9492 | 1172 | 1.4924 | 0.3964 | 1.4924 | 1.2216 | | 0.0907 | 5.9594 | 1174 | 1.4776 | 0.3964 | 1.4776 | 1.2156 | | 0.0907 | 5.9695 | 1176 | 1.4509 | 0.3964 | 1.4509 | 1.2045 | | 0.0907 | 5.9797 | 1178 | 1.4415 | 0.3964 | 1.4415 | 1.2006 | | 0.0907 | 5.9898 | 1180 | 1.4440 | 0.3964 | 1.4440 | 1.2017 | | 0.0907 | 6.0 | 1182 | 1.4587 | 0.3964 | 1.4587 | 1.2078 | | 0.0907 | 6.0102 | 1184 | 1.5045 | 0.3964 | 1.5045 | 1.2266 | | 0.0907 | 6.0203 | 1186 | 1.5323 | 0.3964 | 1.5323 | 1.2379 | | 0.0907 | 6.0305 | 1188 | 1.5787 | 0.3599 | 1.5787 | 1.2565 | | 0.0907 | 6.0406 | 1190 | 1.6441 | 0.3420 | 1.6441 | 1.2822 | | 0.0907 | 6.0508 | 1192 | 1.6307 | 0.3420 | 1.6307 | 1.2770 | | 0.0907 | 6.0609 | 1194 | 1.5705 | 0.3717 | 1.5705 | 1.2532 | | 0.0907 | 6.0711 | 1196 | 1.5433 | 0.3839 | 1.5433 | 1.2423 | | 0.0907 | 6.0812 | 1198 | 1.5312 | 0.3964 | 1.5312 | 1.2374 | | 0.0907 | 6.0914 | 1200 | 1.5539 | 0.3839 | 1.5539 | 1.2465 | | 0.0907 | 6.1015 | 1202 | 1.5658 | 0.3626 | 1.5658 | 1.2513 | | 0.0907 | 6.1117 | 1204 | 1.5423 | 0.3964 | 1.5423 | 1.2419 | | 0.0907 | 6.1218 | 1206 | 1.5025 | 0.3964 | 1.5025 | 1.2258 | | 0.0907 | 6.1320 | 1208 | 1.4517 | 0.3964 | 1.4517 | 1.2049 | | 0.0907 | 6.1421 | 1210 | 1.4372 | 0.3964 | 1.4372 | 1.1988 | | 0.0907 | 6.1523 | 1212 | 1.4201 | 0.4034 | 1.4201 | 1.1917 | | 0.0907 | 6.1624 | 1214 | 1.4518 | 0.3839 | 1.4518 | 1.2049 | | 0.0907 | 6.1726 | 1216 | 1.5119 | 0.3717 | 1.5119 | 1.2296 | | 0.0907 | 6.1827 | 1218 | 1.6044 | 
0.3623 | 1.6044 | 1.2666 | | 0.0907 | 6.1929 | 1220 | 1.6235 | 0.3535 | 1.6235 | 1.2741 | | 0.0907 | 6.2030 | 1222 | 1.6280 | 0.3535 | 1.6280 | 1.2759 | | 0.0907 | 6.2132 | 1224 | 1.6255 | 0.3535 | 1.6255 | 1.2749 | | 0.0907 | 6.2234 | 1226 | 1.5472 | 0.3599 | 1.5472 | 1.2439 | | 0.0907 | 6.2335 | 1228 | 1.4480 | 0.3851 | 1.4480 | 1.2033 | | 0.0907 | 6.2437 | 1230 | 1.3637 | 0.4188 | 1.3637 | 1.1678 | | 0.0907 | 6.2538 | 1232 | 1.3462 | 0.4172 | 1.3462 | 1.1602 | | 0.0907 | 6.2640 | 1234 | 1.3143 | 0.4305 | 1.3143 | 1.1464 | | 0.0907 | 6.2741 | 1236 | 1.3125 | 0.4305 | 1.3125 | 1.1456 | | 0.0907 | 6.2843 | 1238 | 1.3393 | 0.4034 | 1.3393 | 1.1573 | | 0.0907 | 6.2944 | 1240 | 1.3658 | 0.4034 | 1.3658 | 1.1687 | | 0.0907 | 6.3046 | 1242 | 1.3448 | 0.4164 | 1.3448 | 1.1596 | | 0.0907 | 6.3147 | 1244 | 1.3532 | 0.4164 | 1.3532 | 1.1633 | | 0.0907 | 6.3249 | 1246 | 1.3519 | 0.4164 | 1.3519 | 1.1627 | | 0.0907 | 6.3350 | 1248 | 1.3832 | 0.4094 | 1.3832 | 1.1761 | | 0.0907 | 6.3452 | 1250 | 1.4593 | 0.3964 | 1.4593 | 1.2080 | | 0.0907 | 6.3553 | 1252 | 1.5469 | 0.3748 | 1.5469 | 1.2437 | | 0.0907 | 6.3655 | 1254 | 1.5942 | 0.4003 | 1.5942 | 1.2626 | | 0.0907 | 6.3756 | 1256 | 1.5688 | 0.4003 | 1.5688 | 1.2525 | | 0.0907 | 6.3858 | 1258 | 1.5452 | 0.4213 | 1.5452 | 1.2430 | | 0.0907 | 6.3959 | 1260 | 1.5092 | 0.4213 | 1.5092 | 1.2285 | | 0.0907 | 6.4061 | 1262 | 1.4431 | 0.4094 | 1.4431 | 1.2013 | | 0.0907 | 6.4162 | 1264 | 1.3907 | 0.4147 | 1.3907 | 1.1793 | | 0.0907 | 6.4264 | 1266 | 1.3691 | 0.4129 | 1.3691 | 1.1701 | | 0.0907 | 6.4365 | 1268 | 1.3800 | 0.4056 | 1.3800 | 1.1747 | | 0.0907 | 6.4467 | 1270 | 1.3556 | 0.4037 | 1.3556 | 1.1643 | | 0.0907 | 6.4569 | 1272 | 1.3262 | 0.3857 | 1.3262 | 1.1516 | | 0.0907 | 6.4670 | 1274 | 1.3392 | 0.3857 | 1.3392 | 1.1572 | | 0.0907 | 6.4772 | 1276 | 1.3816 | 0.3806 | 1.3816 | 1.1754 | | 0.0907 | 6.4873 | 1278 | 1.4484 | 0.3852 | 1.4484 | 1.2035 | | 0.0907 | 6.4975 | 1280 | 1.5480 | 0.3874 | 1.5480 | 1.2442 | | 0.0907 | 6.5076 | 1282 | 1.6232 | 0.3534 | 1.6232 | 1.2740 | | 0.0907 | 6.5178 | 1284 | 1.6269 | 0.3420 | 1.6269 | 1.2755 | | 0.0907 | 6.5279 | 1286 | 1.5720 | 0.3651 | 1.5720 | 1.2538 | | 0.0907 | 6.5381 | 1288 | 1.4994 | 0.3964 | 1.4994 | 1.2245 | | 0.0907 | 6.5482 | 1290 | 1.4709 | 0.3964 | 1.4709 | 1.2128 | | 0.0907 | 6.5584 | 1292 | 1.5037 | 0.3839 | 1.5037 | 1.2262 | | 0.0907 | 6.5685 | 1294 | 1.5140 | 0.3626 | 1.5140 | 1.2305 | | 0.0907 | 6.5787 | 1296 | 1.5046 | 0.3817 | 1.5046 | 1.2266 | | 0.0907 | 6.5888 | 1298 | 1.4945 | 0.3817 | 1.4945 | 1.2225 | | 0.0907 | 6.5990 | 1300 | 1.4607 | 0.3817 | 1.4607 | 1.2086 | | 0.0907 | 6.6091 | 1302 | 1.4484 | 0.3817 | 1.4484 | 1.2035 | | 0.0907 | 6.6193 | 1304 | 1.4224 | 0.3817 | 1.4224 | 1.1926 | | 0.0907 | 6.6294 | 1306 | 1.4150 | 0.3817 | 1.4150 | 1.1895 | | 0.0907 | 6.6396 | 1308 | 1.4240 | 0.3817 | 1.4240 | 1.1933 | | 0.0907 | 6.6497 | 1310 | 1.4150 | 0.3817 | 1.4150 | 1.1895 | | 0.0907 | 6.6599 | 1312 | 1.3992 | 0.3817 | 1.3992 | 1.1829 | | 0.0907 | 6.6701 | 1314 | 1.4070 | 0.3817 | 1.4070 | 1.1862 | | 0.0907 | 6.6802 | 1316 | 1.4243 | 0.3817 | 1.4243 | 1.1935 | | 0.0907 | 6.6904 | 1318 | 1.4434 | 0.3817 | 1.4434 | 1.2014 | | 0.0907 | 6.7005 | 1320 | 1.4626 | 0.3817 | 1.4626 | 1.2094 | | 0.0907 | 6.7107 | 1322 | 1.4729 | 0.3817 | 1.4729 | 1.2136 | | 0.0907 | 6.7208 | 1324 | 1.4450 | 0.3817 | 1.4450 | 1.2021 | | 0.0907 | 6.7310 | 1326 | 1.3979 | 0.3955 | 1.3979 | 1.1823 | | 0.0907 | 6.7411 | 1328 | 1.3317 | 0.4085 | 1.3317 | 1.1540 | | 0.0907 | 6.7513 | 1330 | 1.3037 | 0.4085 | 1.3037 | 1.1418 | 
| 0.0907 | 6.7614 | 1332 | 1.2853 | 0.4218 | 1.2853 | 1.1337 | | 0.0907 | 6.7716 | 1334 | 1.2882 | 0.4085 | 1.2882 | 1.1350 | | 0.0907 | 6.7817 | 1336 | 1.3232 | 0.4085 | 1.3232 | 1.1503 | | 0.0907 | 6.7919 | 1338 | 1.3648 | 0.3955 | 1.3648 | 1.1683 | | 0.0907 | 6.8020 | 1340 | 1.3965 | 0.3955 | 1.3965 | 1.1817 | | 0.0907 | 6.8122 | 1342 | 1.4091 | 0.3955 | 1.4091 | 1.1871 | | 0.0907 | 6.8223 | 1344 | 1.4172 | 0.3886 | 1.4172 | 1.1904 | | 0.0907 | 6.8325 | 1346 | 1.4456 | 0.3817 | 1.4456 | 1.2023 | | 0.0907 | 6.8426 | 1348 | 1.4737 | 0.3839 | 1.4737 | 1.2140 | | 0.0907 | 6.8528 | 1350 | 1.4959 | 0.3839 | 1.4959 | 1.2231 | | 0.0907 | 6.8629 | 1352 | 1.4976 | 0.3839 | 1.4976 | 1.2238 | | 0.0907 | 6.8731 | 1354 | 1.4856 | 0.3839 | 1.4856 | 1.2188 | | 0.0907 | 6.8832 | 1356 | 1.4370 | 0.4075 | 1.4370 | 1.1987 | | 0.0907 | 6.8934 | 1358 | 1.3772 | 0.4218 | 1.3772 | 1.1735 | | 0.0907 | 6.9036 | 1360 | 1.3472 | 0.4218 | 1.3472 | 1.1607 | | 0.0907 | 6.9137 | 1362 | 1.3335 | 0.4356 | 1.3335 | 1.1548 | | 0.0907 | 6.9239 | 1364 | 1.3241 | 0.4356 | 1.3241 | 1.1507 | | 0.0907 | 6.9340 | 1366 | 1.3264 | 0.4356 | 1.3264 | 1.1517 | | 0.0907 | 6.9442 | 1368 | 1.3047 | 0.4356 | 1.3047 | 1.1422 | | 0.0907 | 6.9543 | 1370 | 1.2987 | 0.4356 | 1.2987 | 1.1396 | | 0.0907 | 6.9645 | 1372 | 1.3090 | 0.4356 | 1.3090 | 1.1441 | | 0.0907 | 6.9746 | 1374 | 1.3056 | 0.4356 | 1.3056 | 1.1426 | | 0.0907 | 6.9848 | 1376 | 1.3140 | 0.4356 | 1.3140 | 1.1463 | | 0.0907 | 6.9949 | 1378 | 1.3469 | 0.4356 | 1.3469 | 1.1605 | | 0.0907 | 7.0051 | 1380 | 1.3889 | 0.4103 | 1.3889 | 1.1785 | | 0.0907 | 7.0152 | 1382 | 1.4194 | 0.3975 | 1.4194 | 1.1914 | | 0.0907 | 7.0254 | 1384 | 1.4473 | 0.3907 | 1.4473 | 1.2030 | | 0.0907 | 7.0355 | 1386 | 1.4983 | 0.3717 | 1.4983 | 1.2241 | | 0.0907 | 7.0457 | 1388 | 1.5541 | 0.3717 | 1.5541 | 1.2466 | | 0.0907 | 7.0558 | 1390 | 1.5854 | 0.3508 | 1.5854 | 1.2591 | | 0.0907 | 7.0660 | 1392 | 1.6411 | 0.3310 | 1.6411 | 1.2811 | | 0.0907 | 7.0761 | 1394 | 1.6507 | 0.3337 | 1.6507 | 1.2848 | | 0.0907 | 7.0863 | 1396 | 1.6147 | 0.3420 | 1.6147 | 1.2707 | | 0.0907 | 7.0964 | 1398 | 1.5799 | 0.3508 | 1.5799 | 1.2569 | | 0.0907 | 7.1066 | 1400 | 1.5474 | 0.3508 | 1.5474 | 1.2439 | | 0.0907 | 7.1168 | 1402 | 1.5092 | 0.3717 | 1.5092 | 1.2285 | | 0.0907 | 7.1269 | 1404 | 1.4659 | 0.4052 | 1.4659 | 1.2108 | | 0.0907 | 7.1371 | 1406 | 1.4334 | 0.4181 | 1.4334 | 1.1972 | | 0.0907 | 7.1472 | 1408 | 1.4492 | 0.4052 | 1.4492 | 1.2038 | | 0.0907 | 7.1574 | 1410 | 1.4806 | 0.3717 | 1.4806 | 1.2168 | | 0.0907 | 7.1675 | 1412 | 1.5072 | 0.3717 | 1.5072 | 1.2277 | | 0.0907 | 7.1777 | 1414 | 1.5392 | 0.3717 | 1.5392 | 1.2406 | | 0.0907 | 7.1878 | 1416 | 1.5747 | 0.3717 | 1.5747 | 1.2549 | | 0.0907 | 7.1980 | 1418 | 1.6102 | 0.3393 | 1.6102 | 1.2689 | | 0.0907 | 7.2081 | 1420 | 1.6423 | 0.3337 | 1.6423 | 1.2815 | | 0.0907 | 7.2183 | 1422 | 1.6375 | 0.3393 | 1.6375 | 1.2796 | | 0.0907 | 7.2284 | 1424 | 1.6278 | 0.3393 | 1.6278 | 1.2758 | | 0.0907 | 7.2386 | 1426 | 1.5918 | 0.3599 | 1.5918 | 1.2617 | | 0.0907 | 7.2487 | 1428 | 1.5351 | 0.3599 | 1.5351 | 1.2390 | | 0.0907 | 7.2589 | 1430 | 1.4584 | 0.3717 | 1.4584 | 1.2077 | | 0.0907 | 7.2690 | 1432 | 1.3826 | 0.3975 | 1.3826 | 1.1758 | | 0.0907 | 7.2792 | 1434 | 1.3386 | 0.4172 | 1.3386 | 1.1570 | | 0.0907 | 7.2893 | 1436 | 1.3258 | 0.4305 | 1.3258 | 1.1514 | | 0.0907 | 7.2995 | 1438 | 1.3207 | 0.4305 | 1.3207 | 1.1492 | | 0.0907 | 7.3096 | 1440 | 1.3430 | 0.4305 | 1.3430 | 1.1589 | | 0.0907 | 7.3198 | 1442 | 1.3974 | 0.3975 | 1.3974 | 1.1821 | | 0.0907 | 7.3299 | 1444 | 
1.4744 | 0.3599 | 1.4744 | 1.2143 | | 0.0907 | 7.3401 | 1446 | 1.5613 | 0.3647 | 1.5613 | 1.2495 | | 0.0907 | 7.3503 | 1448 | 1.6184 | 0.3647 | 1.6184 | 1.2722 | | 0.0907 | 7.3604 | 1450 | 1.6511 | 0.3669 | 1.6511 | 1.2849 | | 0.0907 | 7.3706 | 1452 | 1.6422 | 0.3420 | 1.6422 | 1.2815 | | 0.0907 | 7.3807 | 1454 | 1.5964 | 0.3393 | 1.5964 | 1.2635 | | 0.0907 | 7.3909 | 1456 | 1.5409 | 0.3508 | 1.5409 | 1.2413 | | 0.0907 | 7.4010 | 1458 | 1.4797 | 0.3626 | 1.4797 | 1.2164 | | 0.0907 | 7.4112 | 1460 | 1.4146 | 0.3944 | 1.4146 | 1.1894 | | 0.0907 | 7.4213 | 1462 | 1.3969 | 0.4075 | 1.3969 | 1.1819 | | 0.0907 | 7.4315 | 1464 | 1.4106 | 0.4075 | 1.4106 | 1.1877 | | 0.0907 | 7.4416 | 1466 | 1.4503 | 0.3748 | 1.4503 | 1.2043 | | 0.0907 | 7.4518 | 1468 | 1.5164 | 0.3748 | 1.5164 | 1.2314 | | 0.0907 | 7.4619 | 1470 | 1.5497 | 0.3626 | 1.5497 | 1.2449 | | 0.0907 | 7.4721 | 1472 | 1.5657 | 0.3626 | 1.5657 | 1.2513 | | 0.0907 | 7.4822 | 1474 | 1.5556 | 0.3626 | 1.5556 | 1.2472 | | 0.0907 | 7.4924 | 1476 | 1.5401 | 0.3626 | 1.5401 | 1.2410 | | 0.0907 | 7.5025 | 1478 | 1.5200 | 0.3626 | 1.5200 | 1.2329 | | 0.0907 | 7.5127 | 1480 | 1.5059 | 0.3626 | 1.5059 | 1.2272 | | 0.0907 | 7.5228 | 1482 | 1.5076 | 0.3626 | 1.5076 | 1.2278 | | 0.0907 | 7.5330 | 1484 | 1.5304 | 0.3651 | 1.5304 | 1.2371 | | 0.0907 | 7.5431 | 1486 | 1.5389 | 0.3651 | 1.5389 | 1.2405 | | 0.0907 | 7.5533 | 1488 | 1.5656 | 0.3534 | 1.5656 | 1.2513 | | 0.0907 | 7.5635 | 1490 | 1.5938 | 0.3420 | 1.5938 | 1.2625 | | 0.0907 | 7.5736 | 1492 | 1.5934 | 0.3420 | 1.5934 | 1.2623 | | 0.0907 | 7.5838 | 1494 | 1.5969 | 0.3310 | 1.5969 | 1.2637 | | 0.0907 | 7.5939 | 1496 | 1.5846 | 0.3534 | 1.5846 | 1.2588 | | 0.0907 | 7.6041 | 1498 | 1.5845 | 0.3534 | 1.5845 | 1.2588 | | 0.0523 | 7.6142 | 1500 | 1.5772 | 0.3651 | 1.5772 | 1.2559 | | 0.0523 | 7.6244 | 1502 | 1.5609 | 0.3626 | 1.5609 | 1.2494 | | 0.0523 | 7.6345 | 1504 | 1.5361 | 0.3626 | 1.5361 | 1.2394 | | 0.0523 | 7.6447 | 1506 | 1.5095 | 0.3626 | 1.5095 | 1.2286 | | 0.0523 | 7.6548 | 1508 | 1.4950 | 0.3748 | 1.4950 | 1.2227 | | 0.0523 | 7.6650 | 1510 | 1.5073 | 0.3748 | 1.5073 | 1.2277 | | 0.0523 | 7.6751 | 1512 | 1.5342 | 0.3626 | 1.5342 | 1.2386 | | 0.0523 | 7.6853 | 1514 | 1.5451 | 0.3626 | 1.5451 | 1.2430 | | 0.0523 | 7.6954 | 1516 | 1.5343 | 0.3748 | 1.5343 | 1.2387 | | 0.0523 | 7.7056 | 1518 | 1.5202 | 0.3748 | 1.5202 | 1.2330 | | 0.0523 | 7.7157 | 1520 | 1.5102 | 0.3748 | 1.5102 | 1.2289 | | 0.0523 | 7.7259 | 1522 | 1.5037 | 0.3748 | 1.5037 | 1.2263 | | 0.0523 | 7.7360 | 1524 | 1.5070 | 0.3748 | 1.5070 | 1.2276 | | 0.0523 | 7.7462 | 1526 | 1.5334 | 0.3748 | 1.5334 | 1.2383 | | 0.0523 | 7.7563 | 1528 | 1.5519 | 0.3626 | 1.5519 | 1.2457 | | 0.0523 | 7.7665 | 1530 | 1.5604 | 0.3626 | 1.5604 | 1.2492 | | 0.0523 | 7.7766 | 1532 | 1.5546 | 0.3626 | 1.5546 | 1.2468 | | 0.0523 | 7.7868 | 1534 | 1.5314 | 0.3748 | 1.5314 | 1.2375 | | 0.0523 | 7.7970 | 1536 | 1.5086 | 0.3748 | 1.5086 | 1.2283 | | 0.0523 | 7.8071 | 1538 | 1.5050 | 0.3748 | 1.5050 | 1.2268 | | 0.0523 | 7.8173 | 1540 | 1.5105 | 0.3748 | 1.5105 | 1.2290 | | 0.0523 | 7.8274 | 1542 | 1.5270 | 0.3626 | 1.5270 | 1.2357 | | 0.0523 | 7.8376 | 1544 | 1.5340 | 0.3626 | 1.5340 | 1.2386 | | 0.0523 | 7.8477 | 1546 | 1.5646 | 0.3626 | 1.5646 | 1.2509 | | 0.0523 | 7.8579 | 1548 | 1.6024 | 0.3534 | 1.6024 | 1.2659 | | 0.0523 | 7.8680 | 1550 | 1.6572 | 0.3310 | 1.6572 | 1.2873 | | 0.0523 | 7.8782 | 1552 | 1.7115 | 0.3231 | 1.7115 | 1.3082 | | 0.0523 | 7.8883 | 1554 | 1.7364 | 0.3128 | 1.7364 | 1.3177 | | 0.0523 | 7.8985 | 1556 | 1.7434 | 0.3128 | 1.7434 | 
1.3204 | | 0.0523 | 7.9086 | 1558 | 1.7466 | 0.3128 | 1.7466 | 1.3216 | | 0.0523 | 7.9188 | 1560 | 1.7122 | 0.3231 | 1.7122 | 1.3085 | | 0.0523 | 7.9289 | 1562 | 1.6575 | 0.3420 | 1.6575 | 1.2874 | | 0.0523 | 7.9391 | 1564 | 1.6180 | 0.3534 | 1.6180 | 1.2720 | | 0.0523 | 7.9492 | 1566 | 1.5898 | 0.3534 | 1.5898 | 1.2609 | | 0.0523 | 7.9594 | 1568 | 1.5693 | 0.3717 | 1.5693 | 1.2527 | | 0.0523 | 7.9695 | 1570 | 1.5513 | 0.3839 | 1.5513 | 1.2455 | | 0.0523 | 7.9797 | 1572 | 1.5172 | 0.3839 | 1.5172 | 1.2317 | | 0.0523 | 7.9898 | 1574 | 1.4884 | 0.3839 | 1.4884 | 1.2200 | | 0.0523 | 8.0 | 1576 | 1.4749 | 0.3964 | 1.4749 | 1.2145 | | 0.0523 | 8.0102 | 1578 | 1.4544 | 0.3964 | 1.4544 | 1.2060 | | 0.0523 | 8.0203 | 1580 | 1.4526 | 0.3964 | 1.4526 | 1.2052 | | 0.0523 | 8.0305 | 1582 | 1.4829 | 0.3964 | 1.4829 | 1.2177 | | 0.0523 | 8.0406 | 1584 | 1.5346 | 0.3626 | 1.5346 | 1.2388 | | 0.0523 | 8.0508 | 1586 | 1.5692 | 0.3626 | 1.5692 | 1.2527 | | 0.0523 | 8.0609 | 1588 | 1.5948 | 0.3508 | 1.5948 | 1.2629 | | 0.0523 | 8.0711 | 1590 | 1.6074 | 0.3420 | 1.6074 | 1.2678 | | 0.0523 | 8.0812 | 1592 | 1.5911 | 0.3508 | 1.5911 | 1.2614 | | 0.0523 | 8.0914 | 1594 | 1.5791 | 0.3508 | 1.5791 | 1.2566 | | 0.0523 | 8.1015 | 1596 | 1.5656 | 0.3626 | 1.5656 | 1.2512 | | 0.0523 | 8.1117 | 1598 | 1.5595 | 0.3717 | 1.5595 | 1.2488 | | 0.0523 | 8.1218 | 1600 | 1.5657 | 0.3717 | 1.5657 | 1.2513 | | 0.0523 | 8.1320 | 1602 | 1.5767 | 0.3599 | 1.5767 | 1.2557 | | 0.0523 | 8.1421 | 1604 | 1.6097 | 0.3623 | 1.6097 | 1.2687 | | 0.0523 | 8.1523 | 1606 | 1.6481 | 0.3420 | 1.6481 | 1.2838 | | 0.0523 | 8.1624 | 1608 | 1.6578 | 0.3310 | 1.6578 | 1.2876 | | 0.0523 | 8.1726 | 1610 | 1.6593 | 0.3310 | 1.6593 | 1.2881 | | 0.0523 | 8.1827 | 1612 | 1.6325 | 0.3623 | 1.6325 | 1.2777 | | 0.0523 | 8.1929 | 1614 | 1.6030 | 0.3623 | 1.6030 | 1.2661 | | 0.0523 | 8.2030 | 1616 | 1.5873 | 0.3623 | 1.5873 | 1.2599 | | 0.0523 | 8.2132 | 1618 | 1.5789 | 0.3623 | 1.5789 | 1.2566 | | 0.0523 | 8.2234 | 1620 | 1.5748 | 0.3623 | 1.5748 | 1.2549 | | 0.0523 | 8.2335 | 1622 | 1.5893 | 0.3623 | 1.5893 | 1.2607 | | 0.0523 | 8.2437 | 1624 | 1.5957 | 0.3623 | 1.5957 | 1.2632 | | 0.0523 | 8.2538 | 1626 | 1.6002 | 0.3623 | 1.6002 | 1.2650 | | 0.0523 | 8.2640 | 1628 | 1.6079 | 0.3623 | 1.6079 | 1.2680 | | 0.0523 | 8.2741 | 1630 | 1.6152 | 0.3420 | 1.6152 | 1.2709 | | 0.0523 | 8.2843 | 1632 | 1.6233 | 0.3420 | 1.6233 | 1.2741 | | 0.0523 | 8.2944 | 1634 | 1.6441 | 0.3310 | 1.6441 | 1.2822 | | 0.0523 | 8.3046 | 1636 | 1.6588 | 0.3310 | 1.6588 | 1.2879 | | 0.0523 | 8.3147 | 1638 | 1.6617 | 0.3310 | 1.6617 | 1.2891 | | 0.0523 | 8.3249 | 1640 | 1.6420 | 0.3420 | 1.6420 | 1.2814 | | 0.0523 | 8.3350 | 1642 | 1.6009 | 0.3420 | 1.6009 | 1.2653 | | 0.0523 | 8.3452 | 1644 | 1.5559 | 0.3626 | 1.5559 | 1.2474 | | 0.0523 | 8.3553 | 1646 | 1.5106 | 0.3626 | 1.5106 | 1.2291 | | 0.0523 | 8.3655 | 1648 | 1.4846 | 0.3748 | 1.4846 | 1.2184 | | 0.0523 | 8.3756 | 1650 | 1.4666 | 0.3748 | 1.4666 | 1.2110 | | 0.0523 | 8.3858 | 1652 | 1.4516 | 0.4094 | 1.4516 | 1.2048 | | 0.0523 | 8.3959 | 1654 | 1.4377 | 0.4094 | 1.4377 | 1.1990 | | 0.0523 | 8.4061 | 1656 | 1.4150 | 0.4094 | 1.4150 | 1.1895 | | 0.0523 | 8.4162 | 1658 | 1.3959 | 0.4094 | 1.3959 | 1.1815 | | 0.0523 | 8.4264 | 1660 | 1.3982 | 0.4094 | 1.3982 | 1.1825 | | 0.0523 | 8.4365 | 1662 | 1.4163 | 0.4094 | 1.4163 | 1.1901 | | 0.0523 | 8.4467 | 1664 | 1.4234 | 0.4094 | 1.4234 | 1.1931 | | 0.0523 | 8.4569 | 1666 | 1.4396 | 0.3964 | 1.4396 | 1.1998 | | 0.0523 | 8.4670 | 1668 | 1.4731 | 0.3839 | 1.4731 | 1.2137 | | 0.0523 | 8.4772 | 
1670 | 1.4966 | 0.3839 | 1.4966 | 1.2234 | | 0.0523 | 8.4873 | 1672 | 1.5077 | 0.3839 | 1.5077 | 1.2279 | | 0.0523 | 8.4975 | 1674 | 1.5140 | 0.3839 | 1.5140 | 1.2304 | | 0.0523 | 8.5076 | 1676 | 1.5069 | 0.3839 | 1.5069 | 1.2276 | | 0.0523 | 8.5178 | 1678 | 1.4809 | 0.3839 | 1.4809 | 1.2169 | | 0.0523 | 8.5279 | 1680 | 1.4428 | 0.3839 | 1.4428 | 1.2012 | | 0.0523 | 8.5381 | 1682 | 1.4267 | 0.3839 | 1.4267 | 1.1944 | | 0.0523 | 8.5482 | 1684 | 1.4003 | 0.3839 | 1.4003 | 1.1833 | | 0.0523 | 8.5584 | 1686 | 1.3799 | 0.4314 | 1.3799 | 1.1747 | | 0.0523 | 8.5685 | 1688 | 1.3762 | 0.4314 | 1.3762 | 1.1731 | | 0.0523 | 8.5787 | 1690 | 1.3740 | 0.4314 | 1.3740 | 1.1722 | | 0.0523 | 8.5888 | 1692 | 1.3863 | 0.4052 | 1.3863 | 1.1774 | | 0.0523 | 8.5990 | 1694 | 1.4056 | 0.3839 | 1.4056 | 1.1856 | | 0.0523 | 8.6091 | 1696 | 1.4243 | 0.3839 | 1.4243 | 1.1934 | | 0.0523 | 8.6193 | 1698 | 1.4522 | 0.3839 | 1.4522 | 1.2051 | | 0.0523 | 8.6294 | 1700 | 1.4813 | 0.3839 | 1.4813 | 1.2171 | | 0.0523 | 8.6396 | 1702 | 1.4990 | 0.3839 | 1.4990 | 1.2243 | | 0.0523 | 8.6497 | 1704 | 1.5107 | 0.3839 | 1.5107 | 1.2291 | | 0.0523 | 8.6599 | 1706 | 1.5220 | 0.3839 | 1.5220 | 1.2337 | | 0.0523 | 8.6701 | 1708 | 1.5345 | 0.3839 | 1.5345 | 1.2387 | | 0.0523 | 8.6802 | 1710 | 1.5435 | 0.3839 | 1.5435 | 1.2424 | | 0.0523 | 8.6904 | 1712 | 1.5384 | 0.3839 | 1.5384 | 1.2403 | | 0.0523 | 8.7005 | 1714 | 1.5437 | 0.3839 | 1.5437 | 1.2425 | | 0.0523 | 8.7107 | 1716 | 1.5409 | 0.3839 | 1.5409 | 1.2413 | | 0.0523 | 8.7208 | 1718 | 1.5289 | 0.3839 | 1.5289 | 1.2365 | | 0.0523 | 8.7310 | 1720 | 1.5124 | 0.3839 | 1.5124 | 1.2298 | | 0.0523 | 8.7411 | 1722 | 1.4993 | 0.3839 | 1.4993 | 1.2245 | | 0.0523 | 8.7513 | 1724 | 1.4825 | 0.3964 | 1.4825 | 1.2176 | | 0.0523 | 8.7614 | 1726 | 1.4766 | 0.3964 | 1.4766 | 1.2152 | | 0.0523 | 8.7716 | 1728 | 1.4635 | 0.3964 | 1.4635 | 1.2097 | | 0.0523 | 8.7817 | 1730 | 1.4557 | 0.4094 | 1.4557 | 1.2065 | | 0.0523 | 8.7919 | 1732 | 1.4430 | 0.4094 | 1.4430 | 1.2013 | | 0.0523 | 8.8020 | 1734 | 1.4417 | 0.4094 | 1.4417 | 1.2007 | | 0.0523 | 8.8122 | 1736 | 1.4529 | 0.3964 | 1.4529 | 1.2053 | | 0.0523 | 8.8223 | 1738 | 1.4645 | 0.3964 | 1.4645 | 1.2102 | | 0.0523 | 8.8325 | 1740 | 1.4785 | 0.3964 | 1.4785 | 1.2159 | | 0.0523 | 8.8426 | 1742 | 1.4942 | 0.3964 | 1.4942 | 1.2224 | | 0.0523 | 8.8528 | 1744 | 1.4992 | 0.3964 | 1.4992 | 1.2244 | | 0.0523 | 8.8629 | 1746 | 1.5142 | 0.3839 | 1.5142 | 1.2305 | | 0.0523 | 8.8731 | 1748 | 1.5240 | 0.3839 | 1.5240 | 1.2345 | | 0.0523 | 8.8832 | 1750 | 1.5249 | 0.3839 | 1.5249 | 1.2349 | | 0.0523 | 8.8934 | 1752 | 1.5279 | 0.3839 | 1.5279 | 1.2361 | | 0.0523 | 8.9036 | 1754 | 1.5387 | 0.3839 | 1.5387 | 1.2404 | | 0.0523 | 8.9137 | 1756 | 1.5551 | 0.3508 | 1.5551 | 1.2470 | | 0.0523 | 8.9239 | 1758 | 1.5551 | 0.3508 | 1.5551 | 1.2470 | | 0.0523 | 8.9340 | 1760 | 1.5503 | 0.3508 | 1.5503 | 1.2451 | | 0.0523 | 8.9442 | 1762 | 1.5391 | 0.3626 | 1.5391 | 1.2406 | | 0.0523 | 8.9543 | 1764 | 1.5197 | 0.3839 | 1.5197 | 1.2328 | | 0.0523 | 8.9645 | 1766 | 1.5057 | 0.3839 | 1.5057 | 1.2271 | | 0.0523 | 8.9746 | 1768 | 1.4998 | 0.3839 | 1.4998 | 1.2247 | | 0.0523 | 8.9848 | 1770 | 1.4897 | 0.3964 | 1.4897 | 1.2205 | | 0.0523 | 8.9949 | 1772 | 1.4805 | 0.3964 | 1.4805 | 1.2168 | | 0.0523 | 9.0051 | 1774 | 1.4621 | 0.3964 | 1.4621 | 1.2092 | | 0.0523 | 9.0152 | 1776 | 1.4420 | 0.3964 | 1.4420 | 1.2008 | | 0.0523 | 9.0254 | 1778 | 1.4138 | 0.4314 | 1.4138 | 1.1890 | | 0.0523 | 9.0355 | 1780 | 1.3900 | 0.4299 | 1.3900 | 1.1790 | | 0.0523 | 9.0457 | 1782 | 1.3807 | 0.4299 | 
1.3807 | 1.1750 | | 0.0523 | 9.0558 | 1784 | 1.3860 | 0.4299 | 1.3860 | 1.1773 | | 0.0523 | 9.0660 | 1786 | 1.3894 | 0.4299 | 1.3894 | 1.1787 | | 0.0523 | 9.0761 | 1788 | 1.3942 | 0.4299 | 1.3942 | 1.1808 | | 0.0523 | 9.0863 | 1790 | 1.4094 | 0.4299 | 1.4094 | 1.1872 | | 0.0523 | 9.0964 | 1792 | 1.4362 | 0.3964 | 1.4362 | 1.1984 | | 0.0523 | 9.1066 | 1794 | 1.4653 | 0.3964 | 1.4653 | 1.2105 | | 0.0523 | 9.1168 | 1796 | 1.4864 | 0.3839 | 1.4864 | 1.2192 | | 0.0523 | 9.1269 | 1798 | 1.5005 | 0.3839 | 1.5005 | 1.2249 | | 0.0523 | 9.1371 | 1800 | 1.5115 | 0.3839 | 1.5115 | 1.2294 | | 0.0523 | 9.1472 | 1802 | 1.5227 | 0.3717 | 1.5227 | 1.2340 | | 0.0523 | 9.1574 | 1804 | 1.5310 | 0.3717 | 1.5310 | 1.2373 | | 0.0523 | 9.1675 | 1806 | 1.5409 | 0.3717 | 1.5409 | 1.2413 | | 0.0523 | 9.1777 | 1808 | 1.5506 | 0.3717 | 1.5506 | 1.2452 | | 0.0523 | 9.1878 | 1810 | 1.5612 | 0.3393 | 1.5612 | 1.2495 | | 0.0523 | 9.1980 | 1812 | 1.5737 | 0.3393 | 1.5737 | 1.2545 | | 0.0523 | 9.2081 | 1814 | 1.5906 | 0.3393 | 1.5906 | 1.2612 | | 0.0523 | 9.2183 | 1816 | 1.5942 | 0.3393 | 1.5942 | 1.2626 | | 0.0523 | 9.2284 | 1818 | 1.5898 | 0.3393 | 1.5898 | 1.2609 | | 0.0523 | 9.2386 | 1820 | 1.5868 | 0.3393 | 1.5868 | 1.2597 | | 0.0523 | 9.2487 | 1822 | 1.5776 | 0.3599 | 1.5776 | 1.2560 | | 0.0523 | 9.2589 | 1824 | 1.5681 | 0.3599 | 1.5681 | 1.2523 | | 0.0523 | 9.2690 | 1826 | 1.5603 | 0.3599 | 1.5603 | 1.2491 | | 0.0523 | 9.2792 | 1828 | 1.5548 | 0.3599 | 1.5548 | 1.2469 | | 0.0523 | 9.2893 | 1830 | 1.5441 | 0.3599 | 1.5441 | 1.2426 | | 0.0523 | 9.2995 | 1832 | 1.5322 | 0.3599 | 1.5322 | 1.2378 | | 0.0523 | 9.3096 | 1834 | 1.5219 | 0.3599 | 1.5219 | 1.2336 | | 0.0523 | 9.3198 | 1836 | 1.5018 | 0.3599 | 1.5018 | 1.2255 | | 0.0523 | 9.3299 | 1838 | 1.4850 | 0.3599 | 1.4850 | 1.2186 | | 0.0523 | 9.3401 | 1840 | 1.4752 | 0.3599 | 1.4752 | 1.2146 | | 0.0523 | 9.3503 | 1842 | 1.4706 | 0.3599 | 1.4706 | 1.2127 | | 0.0523 | 9.3604 | 1844 | 1.4687 | 0.3599 | 1.4687 | 1.2119 | | 0.0523 | 9.3706 | 1846 | 1.4610 | 0.3964 | 1.4610 | 1.2087 | | 0.0523 | 9.3807 | 1848 | 1.4576 | 0.3839 | 1.4576 | 1.2073 | | 0.0523 | 9.3909 | 1850 | 1.4620 | 0.3717 | 1.4620 | 1.2092 | | 0.0523 | 9.4010 | 1852 | 1.4664 | 0.3717 | 1.4664 | 1.2110 | | 0.0523 | 9.4112 | 1854 | 1.4730 | 0.3717 | 1.4730 | 1.2137 | | 0.0523 | 9.4213 | 1856 | 1.4839 | 0.3599 | 1.4839 | 1.2181 | | 0.0523 | 9.4315 | 1858 | 1.4925 | 0.3599 | 1.4925 | 1.2217 | | 0.0523 | 9.4416 | 1860 | 1.4962 | 0.3599 | 1.4962 | 1.2232 | | 0.0523 | 9.4518 | 1862 | 1.4952 | 0.3599 | 1.4952 | 1.2228 | | 0.0523 | 9.4619 | 1864 | 1.4999 | 0.3599 | 1.4999 | 1.2247 | | 0.0523 | 9.4721 | 1866 | 1.5036 | 0.3599 | 1.5036 | 1.2262 | | 0.0523 | 9.4822 | 1868 | 1.5017 | 0.3599 | 1.5017 | 1.2254 | | 0.0523 | 9.4924 | 1870 | 1.5017 | 0.3717 | 1.5017 | 1.2255 | | 0.0523 | 9.5025 | 1872 | 1.4990 | 0.3717 | 1.4990 | 1.2243 | | 0.0523 | 9.5127 | 1874 | 1.4994 | 0.3717 | 1.4994 | 1.2245 | | 0.0523 | 9.5228 | 1876 | 1.5053 | 0.3717 | 1.5053 | 1.2269 | | 0.0523 | 9.5330 | 1878 | 1.5138 | 0.3599 | 1.5138 | 1.2304 | | 0.0523 | 9.5431 | 1880 | 1.5148 | 0.3599 | 1.5148 | 1.2308 | | 0.0523 | 9.5533 | 1882 | 1.5173 | 0.3599 | 1.5173 | 1.2318 | | 0.0523 | 9.5635 | 1884 | 1.5187 | 0.3599 | 1.5187 | 1.2324 | | 0.0523 | 9.5736 | 1886 | 1.5195 | 0.3599 | 1.5195 | 1.2327 | | 0.0523 | 9.5838 | 1888 | 1.5173 | 0.3599 | 1.5173 | 1.2318 | | 0.0523 | 9.5939 | 1890 | 1.5148 | 0.3599 | 1.5148 | 1.2308 | | 0.0523 | 9.6041 | 1892 | 1.5133 | 0.3717 | 1.5133 | 1.2302 | | 0.0523 | 9.6142 | 1894 | 1.5148 | 0.3599 | 1.5148 | 1.2308 | | 0.0523 
| 9.6244 | 1896 | 1.5136 | 0.3717 | 1.5136 | 1.2303 | | 0.0523 | 9.6345 | 1898 | 1.5157 | 0.3717 | 1.5157 | 1.2312 | | 0.0523 | 9.6447 | 1900 | 1.5183 | 0.3599 | 1.5183 | 1.2322 | | 0.0523 | 9.6548 | 1902 | 1.5201 | 0.3599 | 1.5201 | 1.2329 | | 0.0523 | 9.6650 | 1904 | 1.5190 | 0.3717 | 1.5190 | 1.2325 | | 0.0523 | 9.6751 | 1906 | 1.5183 | 0.3717 | 1.5183 | 1.2322 | | 0.0523 | 9.6853 | 1908 | 1.5181 | 0.3717 | 1.5181 | 1.2321 | | 0.0523 | 9.6954 | 1910 | 1.5147 | 0.3717 | 1.5147 | 1.2307 | | 0.0523 | 9.7056 | 1912 | 1.5102 | 0.3717 | 1.5102 | 1.2289 | | 0.0523 | 9.7157 | 1914 | 1.5047 | 0.3717 | 1.5047 | 1.2267 | | 0.0523 | 9.7259 | 1916 | 1.4988 | 0.3839 | 1.4988 | 1.2242 | | 0.0523 | 9.7360 | 1918 | 1.4934 | 0.3839 | 1.4934 | 1.2220 | | 0.0523 | 9.7462 | 1920 | 1.4925 | 0.3839 | 1.4925 | 1.2217 | | 0.0523 | 9.7563 | 1922 | 1.4899 | 0.3839 | 1.4899 | 1.2206 | | 0.0523 | 9.7665 | 1924 | 1.4894 | 0.3839 | 1.4894 | 1.2204 | | 0.0523 | 9.7766 | 1926 | 1.4878 | 0.3839 | 1.4878 | 1.2197 | | 0.0523 | 9.7868 | 1928 | 1.4874 | 0.3839 | 1.4874 | 1.2196 | | 0.0523 | 9.7970 | 1930 | 1.4854 | 0.3839 | 1.4854 | 1.2188 | | 0.0523 | 9.8071 | 1932 | 1.4845 | 0.3839 | 1.4845 | 1.2184 | | 0.0523 | 9.8173 | 1934 | 1.4846 | 0.3839 | 1.4846 | 1.2185 | | 0.0523 | 9.8274 | 1936 | 1.4860 | 0.3839 | 1.4860 | 1.2190 | | 0.0523 | 9.8376 | 1938 | 1.4867 | 0.3839 | 1.4867 | 1.2193 | | 0.0523 | 9.8477 | 1940 | 1.4892 | 0.3839 | 1.4892 | 1.2203 | | 0.0523 | 9.8579 | 1942 | 1.4914 | 0.3839 | 1.4914 | 1.2212 | | 0.0523 | 9.8680 | 1944 | 1.4924 | 0.3839 | 1.4924 | 1.2216 | | 0.0523 | 9.8782 | 1946 | 1.4923 | 0.3839 | 1.4923 | 1.2216 | | 0.0523 | 9.8883 | 1948 | 1.4940 | 0.3717 | 1.4940 | 1.2223 | | 0.0523 | 9.8985 | 1950 | 1.4967 | 0.3717 | 1.4967 | 1.2234 | | 0.0523 | 9.9086 | 1952 | 1.4977 | 0.3717 | 1.4977 | 1.2238 | | 0.0523 | 9.9188 | 1954 | 1.4987 | 0.3717 | 1.4987 | 1.2242 | | 0.0523 | 9.9289 | 1956 | 1.4990 | 0.3717 | 1.4990 | 1.2243 | | 0.0523 | 9.9391 | 1958 | 1.5001 | 0.3717 | 1.5001 | 1.2248 | | 0.0523 | 9.9492 | 1960 | 1.5015 | 0.3717 | 1.5015 | 1.2254 | | 0.0523 | 9.9594 | 1962 | 1.5025 | 0.3717 | 1.5025 | 1.2258 | | 0.0523 | 9.9695 | 1964 | 1.5037 | 0.3717 | 1.5037 | 1.2263 | | 0.0523 | 9.9797 | 1966 | 1.5043 | 0.3717 | 1.5043 | 1.2265 | | 0.0523 | 9.9898 | 1968 | 1.5048 | 0.3717 | 1.5048 | 1.2267 | | 0.0523 | 10.0 | 1970 | 1.5051 | 0.3717 | 1.5051 | 1.2268 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0+cu118 - Datasets 2.21.0 - Tokenizers 0.19.1
cardiffnlp/tweet-topic-large-multilingual
cardiffnlp
2024-11-26T11:41:39Z
36
1
null
[ "safetensors", "xlm-roberta", "text-classification", "en", "es", "ja", "el", "dataset:cardiffnlp/tweet_topic_multi", "dataset:cardiffnlp/tweet_topic_multilingual", "arxiv:2410.03075", "license:mit", "region:us" ]
text-classification
2024-10-04T01:01:39Z
--- language: - en - es - ja - el widget: - text: It is great to see athletes promoting awareness for climate change. datasets: - cardiffnlp/tweet_topic_multi - cardiffnlp/tweet_topic_multilingual license: mit metrics: - f1 pipeline_tag: text-classification --- # tweet-topic-large-multilingual This model is based on the [cardiffnlp/twitter-xlm-roberta-large-2022](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-large-2022) language model and is finetuned for multi-label topic classification in English, Spanish, Japanese, and Greek. The model is trained using the [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) and [X-Topic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multilingual) datasets (see the main [EMNLP 2024 reference paper](https://arxiv.org/abs/2410.03075)). <b>Labels</b>: | <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> | |-----------------------------|---------------------|----------------------------|--------------------------| | 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports | | 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure | | 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life | | 4: family | 9: gaming | 14: relationships | | ## Full classification example ```python from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import expit MODEL = f"cardiffnlp/tweet-topic-large-multilingual" tokenizer = AutoTokenizer.from_pretrained(MODEL) # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) class_mapping = model.config.id2label text = "It is great to see athletes promoting awareness for climate change." tokens = tokenizer(text, return_tensors='pt') output = model(**tokens) scores = output[0][0].detach().numpy() scores = expit(scores) predictions = (scores >= 0.5) * 1 # TF #tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) #class_mapping = tf_model.config.id2label #text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf') #output = tf_model(**tokens) #scores = output[0][0] #scores = expit(scores) #predictions = (scores >= 0.5) * 1 # Map to classes for i in range(len(predictions)): if predictions[i]: print(class_mapping[i]) ``` Output: ``` news_&_social_concern sports ``` ## Results on X-Topic | | English | Spanish | Japanese | Greek | |--------------|---------|---------|----------|-------| | **Macro-F1** | 60.2 | 52.9 | 57.3 | 50.3 | | **Micro-F1** | 66.3 | 67.0 | 61.4 | 73.0 | ## BibTeX entry and citation info @inproceedings{antypas-etal-2024-multilingual, title = "Multilingual Topic Classification in {X}: Dataset and Analysis", author = "Antypas, Dimosthenis and Ushio, Asahi and Barbieri, Francesco and Camacho-Collados, Jose", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.1123", pages = "20136--20152", abstract = "In the dynamic realm of social media, diverse topics are discussed daily, transcending linguistic boundaries. However, the complexities of understanding and categorising this content across various languages remain an important challenge with traditional techniques like topic modelling often struggling to accommodate this multilingual diversity. In this paper, we introduce X-Topic, a multilingual dataset featuring content in four distinct languages (English, Spanish, Japanese, and Greek), crafted for the purpose of tweet topic classification. Our dataset includes a wide range of topics, tailored for social media content, making it a valuable resource for scientists and professionals working on cross-linguistic analysis, the development of robust multilingual models, and computational scientists studying online dialogue. Finally, we leverage X-Topic to perform a comprehensive cross-linguistic and multilingual analysis, and compare the capabilities of current general- and domain-specific language models.", }
cardiffnlp/tweet-topic-base-multilingual
cardiffnlp
2024-11-26T11:41:12Z
26
0
null
[ "safetensors", "xlm-roberta", "text-classification", "en", "es", "ja", "el", "dataset:cardiffnlp/tweet_topic_multi", "dataset:cardiffnlp/tweet_topic_multilingual", "arxiv:2410.03075", "license:mit", "region:us" ]
text-classification
2024-10-04T00:56:17Z
--- language: - en - es - ja - el widget: - text: It is great to see athletes promoting awareness for climate change. datasets: - cardiffnlp/tweet_topic_multi - cardiffnlp/tweet_topic_multilingual license: mit metrics: - f1 pipeline_tag: text-classification --- # tweet-topic-base-multilingual This model is based on the [cardiffnlp/twitter-xlm-roberta-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) language model, trained on ~198M multilingual tweets and finetuned for multi-label topic classification in English, Spanish, Japanese, and Greek. The model is trained using the [TweetTopic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multi) and [X-Topic](https://huggingface.co/datasets/cardiffnlp/tweet_topic_multilingual) datasets (see the main [EMNLP 2024 reference paper](https://arxiv.org/abs/2410.03075)). <b>Labels</b>: | <span style="font-weight:normal">0: arts_&_culture</span> | <span style="font-weight:normal">5: fashion_&_style</span> | <span style="font-weight:normal">10: learning_&_educational</span> | <span style="font-weight:normal">15: science_&_technology</span> | |-----------------------------|---------------------|----------------------------|--------------------------| | 1: business_&_entrepreneurs | 6: film_tv_&_video | 11: music | 16: sports | | 2: celebrity_&_pop_culture | 7: fitness_&_health | 12: news_&_social_concern | 17: travel_&_adventure | | 3: diaries_&_daily_life | 8: food_&_dining | 13: other_hobbies | 18: youth_&_student_life | | 4: family | 9: gaming | 14: relationships | | ## Full classification example ```python from transformers import AutoModelForSequenceClassification, TFAutoModelForSequenceClassification from transformers import AutoTokenizer import numpy as np from scipy.special import expit MODEL = f"cardiffnlp/tweet-topic-base-multilingual" tokenizer = AutoTokenizer.from_pretrained(MODEL) # PT model = AutoModelForSequenceClassification.from_pretrained(MODEL) class_mapping = model.config.id2label text = "It is great to see athletes promoting awareness for climate change." tokens = tokenizer(text, return_tensors='pt') output = model(**tokens) scores = output[0][0].detach().numpy() scores = expit(scores) predictions = (scores >= 0.5) * 1 # TF #tf_model = TFAutoModelForSequenceClassification.from_pretrained(MODEL) #class_mapping = tf_model.config.id2label #text = "It is great to see athletes promoting awareness for climate change."
#tokens = tokenizer(text, return_tensors='tf') #output = tf_model(**tokens) #scores = output[0][0] #scores = expit(scores) #predictions = (scores >= 0.5) * 1 # Map to classes for i in range(len(predictions)): if predictions[i]: print(class_mapping[i]) ``` Output: ``` news_&_social_concern sports ``` ## Results on X-Topic | | English | Spanish | Japanese | Greek | |--------------|---------|---------|----------|-------| | **Macro-F1** | 55.4 | 48.5 | 50.8 | 41.3 | | **Micro-F1** | 63.5 | 63.3 | 57.8 | 69.8 | ## BibTeX entry and citation info @inproceedings{antypas-etal-2024-multilingual, title = "Multilingual Topic Classification in {X}: Dataset and Analysis", author = "Antypas, Dimosthenis and Ushio, Asahi and Barbieri, Francesco and Camacho-Collados, Jose", editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung", booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.emnlp-main.1123", pages = "20136--20152", abstract = "In the dynamic realm of social media, diverse topics are discussed daily, transcending linguistic boundaries. However, the complexities of understanding and categorising this content across various languages remain an important challenge with traditional techniques like topic modelling often struggling to accommodate this multilingual diversity. In this paper, we introduce X-Topic, a multilingual dataset featuring content in four distinct languages (English, Spanish, Japanese, and Greek), crafted for the purpose of tweet topic classification. Our dataset includes a wide range of topics, tailored for social media content, making it a valuable resource for scientists and professionals working on cross-linguistic analysis, the development of robust multilingual models, and computational scientists studying online dialogue. Finally, we leverage X-Topic to perform a comprehensive cross-linguistic and multilingual analysis, and compare the capabilities of current general- and domain-specific language models.", }
PEGurevich/detr-finetuned-balloon-v2-flip_only
PEGurevich
2024-11-26T11:39:18Z
191
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
2024-11-26T11:39:08Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xn6o/CIFAR_Resnet18
xn6o
2024-11-26T11:25:09Z
5
0
null
[ "en", "dataset:uoft-cs/cifar10", "arxiv:1512.03385", "license:mit", "region:us" ]
null
2024-11-26T05:57:10Z
--- license: mit datasets: - uoft-cs/cifar10 language: - en --- # CIFAR10 Model with ResNet18 ## Dataset ```python torchvision.datasets.CIFAR10 ``` ## Model Architecture > Reference: [1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [ResNet18](https://arxiv.org/abs/1512.03385) [resnet.py](./resnet.py) ## Evaluation [eval.ipynb](./eval.ipynb) ## Results | Model | Learning Rate | Batch Size | Epochs | Accuracy (%) | | --- | --- | --- | --- | --- | | LeNet | 0.0001 | 128 | 154 | 64.21 | | LeNet | 0.001 | 128 | 182 | 74.09 | | ResNet18 | 0.0001 | 128 | 185 | 92.77 | | ResNet18 | 0.0002 | 128 | 186 | 93.82 | | ResNet18 | 0.00025 | 128 | 192 | 93.96 | | ResNet18 | 0.0002825 | 128 | 192 | 94.24 | | ResNet18 | 0.0003125 | 128 | 187 | 94.15 |
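The linked eval.ipynb holds the actual evaluation code; for readers who want a quick standalone check, the snippet below is a minimal, hedged sketch of measuring CIFAR-10 test accuracy with torchvision. The `ResNet18` import from this repo's `resnet.py` and the checkpoint filename are assumptions, and any input normalization should match whatever was used during training.

```python
import torch
import torchvision
import torchvision.transforms as transforms

# Hypothetical evaluation sketch: ResNet18 and the checkpoint name are
# assumptions based on the files listed in this repo (resnet.py, eval.ipynb).
from resnet import ResNet18

transform = transforms.Compose([transforms.ToTensor()])  # add the training-time normalization here if one was used
testset = torchvision.datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ResNet18().to(device)
model.load_state_dict(torch.load("resnet18_cifar10.pth", map_location=device))  # placeholder checkpoint name
model.eval()

correct = total = 0
with torch.no_grad():
    for images, labels in testloader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Test accuracy: {100.0 * correct / total:.2f}%")
```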
mergekit-community/SthenoLlamaStock
mergekit-community
2024-11-26T11:23:01Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2306.01708", "base_model:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2", "base_model:merge:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2", "base_model:O1-OPEN/OpenO1-LLama-8B-v0.1", "base_model:merge:O1-OPEN/OpenO1-LLama-8B-v0.1", "base_model:Sao10K/L3-8B-Stheno-v3.2", "base_model:merge:Sao10K/L3-8B-Stheno-v3.2", "base_model:Undi95/Meta-Llama-3-8B-hf", "base_model:merge:Undi95/Meta-Llama-3-8B-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T10:24:32Z
--- base_model: - Sao10K/L3-8B-Stheno-v3.2 - Undi95/Meta-Llama-3-8B-hf - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2 - O1-OPEN/OpenO1-LLama-8B-v0.1 library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Undi95/Meta-Llama-3-8B-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-hf) as a base. ### Models Merged The following models were included in the merge: * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) * [O1-OPEN/OpenO1-LLama-8B-v0.1](https://huggingface.co/O1-OPEN/OpenO1-LLama-8B-v0.1) ### Configuration The following YAML configuration was used to produce this model: ```yaml # Mergekit Configuration for Model Merge # Base model (primary reference model) base_model: Undi95/Meta-Llama-3-8B-hf # Merge method (using TIES for intelligent merging) merge_method: ties # Specific model configurations models: - model: Sao10K/L3-8B-Stheno-v3.2 parameters: density: 0.4 weight: 0.25 - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2 parameters: density: 0.5 weight: 0.35 - model: O1-OPEN/OpenO1-LLama-8B-v0.1 parameters: density: 0.3 weight: 0.4 # Merge parameters parameters: normalize: true int8_mask: true dtype: 16 # Explicitly using 16-bit float representation # Tokenizer source (use base model's tokenizer) tokenizer_source: base ```
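The YAML above only describes the merge recipe. As a usage note, the resulting checkpoint loads like any other Llama-3-family causal LM; the sketch below uses 🤗 Transformers, with dtype and device settings as illustrative assumptions. Whether a chat template is bundled depends on the base tokenizer, so plain-text prompting is shown.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch: load the merged checkpoint. device_map="auto" assumes
# accelerate is installed; adjust to your hardware.
model_id = "mergekit-community/SthenoLlamaStock"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```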
Imran1/Llama-3.1-Tulu-3-70B-Fp8
Imran1
2024-11-26T11:22:29Z
168
0
null
[ "safetensors", "llama", "license:apache-2.0", "compressed-tensors", "region:us" ]
null
2024-11-26T10:51:48Z
--- license: apache-2.0 --- # Imran1/Llama-3.1-Tulu-3-70B-Fp8 ## Overview **Imran1/Llama-3.1-Tulu-3-70B-Fp8** is an optimized version of the base model **allenai/Llama-3.1-Tulu-3-70B**, utilizing **FP8** (8-bit floating point) precision. This reduces memory usage and increases computational efficiency, making it ideal for large-scale inference tasks without sacrificing the model's performance. This model is well-suited for applications such as: - Conversational AI and chatbots - Instruction-based tasks - Text generation, summarization, math, coding, translation, and dialogue completion ## Key Features - **70 billion parameters** for powerful language generation and understanding capabilities. - **FP8 precision** for reduced memory consumption and faster inference. - Supports **tensor parallelism** for distributed computing environments. ## Usage Instructions ### 1. Running the Model with vLLM You can serve the model using **vLLM** with tensor parallelism enabled. Below is an example command for running the model: ```bash vllm serve Imran1/Llama-3.1-Tulu-3-70B-Fp8 --api-key token-abc123 --tensor-parallel-size 2 ``` ### 2. Interacting with the Model via Python (OpenAI API) Here’s an example of how to interact with the model using the OpenAI API interface: ```python from openai import OpenAI client = OpenAI( base_url="http://localhost:8000/v1", # Your vLLM server URL api_key="token-abc123", # Replace with your API key ) # Example chat completion request completion = client.chat.completions.create( model="Imran1/Llama-3.1-Tulu-3-70B-Fp8", messages=[ {"role": "user", "content": "Hello!"}, ], max_tokens=500, stream=True ) print(completion) ``` ## Performance and Efficiency - **Memory Efficiency**: FP8 precision significantly reduces memory requirements, allowing for larger batch sizes and faster processing times. - **Speed**: The FP8 version provides faster inference, making it highly suitable for real-time applications. ## Limitations - **Precision Trade-offs**: While FP8 improves speed and reduces memory usage, tasks that require high precision (e.g., numerical calculations) may see a slight performance degradation compared to FP16/FP32 versions. ## License This model is licensed under the [Apache-2.0](LICENSE) license. Feel free to use this model for both commercial and non-commercial purposes, ensuring compliance with the license terms. --- For more details and updates, visit the [model page on Hugging Face](https://huggingface.co/Imran1/Llama-3.1-Tulu-3-70B-Fp8).
zixianma/mma_mantis_247k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1
zixianma
2024-11-26T11:22:17Z
5
0
null
[ "safetensors", "llava", "generated_from_trainer", "base_model:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind", "base_model:finetune:TIGER-Lab/Mantis-8B-siglip-llama3-pretraind", "license:llama3", "region:us" ]
null
2024-11-25T15:39:09Z
--- license: llama3 base_model: TIGER-Lab/Mantis-8B-siglip-llama3-pretraind tags: - generated_from_trainer model-index: - name: mma_mantis_247k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://salesforceairesearch.wandb.io/thai-hoang-sf/Mantis/runs/06ppen67) # mma_mantis_247k-seq_len_8192-lr_1e-5-gl_bs_128-ep_1 This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-siglip-llama3-pretraind](https://huggingface.co/TIGER-Lab/Mantis-8B-siglip-llama3-pretraind) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - total_eval_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.43.0 - Pytorch 2.4.0+cu121 - Datasets 2.18.0 - Tokenizers 0.19.1
speakleash/Bielik-11B-v2.3-Instruct-GGUF
speakleash
2024-11-26T11:20:10Z
3,214
24
transformers
[ "transformers", "gguf", "mistral", "text-generation", "finetuned", "pl", "base_model:speakleash/Bielik-11B-v2.3-Instruct", "base_model:quantized:speakleash/Bielik-11B-v2.3-Instruct", "license:apache-2.0", "autotrain_compatible", "region:us", "imatrix", "conversational" ]
text-generation
2024-09-05T12:49:31Z
--- language: - pl license: apache-2.0 library_name: transformers tags: - finetuned - gguf inference: false pipeline_tag: text-generation base_model: speakleash/Bielik-11B-v2.3-Instruct --- <p align="center"> <img src="https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1-GGUF/raw/main/speakleash_cyfronet.png"> </p> # Bielik-11B-v2.3-Instruct-GGUF This repo contains GGUF format model files for [SpeakLeash](https://speakleash.org/)'s [Bielik-11B-v2.3-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct). <b><u>DISCLAIMER: Be aware that quantised models show reduced response quality and possible hallucinations!</u></b><br> ### Available quantization formats: * **q4_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K * **q5_k_m:** Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K * **q6_k:** Uses Q8_K for all tensors * **q8_0:** Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. ### Ollama Modelfile The GGUF file can be used with [Ollama](https://ollama.com/). To do this, you need to import the model using the configuration defined in the Modelfile. For a model such as Bielik-11B-v2.3-Instruct.Q4_K_M.gguf (use the full path to the model location), the Modelfile looks like: ``` FROM ./Bielik-11B-v2.3-Instruct.Q4_K_M.gguf TEMPLATE """<s>{{ if .System }}<|start_header_id|>system<|end_header_id|> {{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|> {{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|> {{ .Response }}<|eot_id|>""" PARAMETER stop "<|start_header_id|>" PARAMETER stop "<|end_header_id|>" PARAMETER stop "<|eot_id|>" # Remember to set a low temperature for experimental models (1-3 bits) PARAMETER temperature 0.1 ``` ### Model description: * **Developed by:** [SpeakLeash](https://speakleash.org/) & [ACK Cyfronet AGH](https://www.cyfronet.pl/) * **Language:** Polish * **Model type:** causal decoder-only * **Quant from:** [Bielik-11B-v2.3-Instruct](https://huggingface.co/speakleash/Bielik-11B-v2.3-Instruct) * **Finetuned from:** [Bielik-11B-v2](https://huggingface.co/speakleash/Bielik-11B-v2) * **License:** Apache 2.0 and [Terms of Use](https://bielik.ai/terms/) ### About GGUF GGUF is a format introduced by the llama.cpp team on August 21st 2023. Here is an incomplete list of clients and libraries that are known to support GGUF: * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option. * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration. * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling. * [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel. * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows, macOS (Silicon) and Linux, with GPU acceleration. * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection. * [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use. * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that ctransformers has not been updated in a long time and does not support many recent models. ### Responsible for model quantization * [Remigiusz Kinas](https://www.linkedin.com/in/remigiusz-kinas/)<sup>SpeakLeash</sup> - team leadership, conceptualizing, calibration data preparation, process creation and quantized model delivery. ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/CPBxPce4).
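Since llama-cpp-python appears in the list above, here is a minimal, hedged sketch of loading one of the quantized files locally. The file name matches the Modelfile example earlier in this card; the context size, GPU-offload setting, and the Polish prompt are assumptions to adjust for your hardware and use case.

```python
from llama_cpp import Llama

# Minimal sketch: model_path assumes Bielik-11B-v2.3-Instruct.Q4_K_M.gguf
# has been downloaded from this repo; n_ctx and n_gpu_layers are illustrative.
llm = Llama(
    model_path="./Bielik-11B-v2.3-Instruct.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers if a GPU is available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Napisz krótki wiersz o Krakowie."}],
    temperature=0.1,
    max_tokens=200,
)
print(response["choices"][0]["message"]["content"])
```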
frostsg/Pentesting-GPT-v1.0-LLM
frostsg
2024-11-26T11:19:02Z
11
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T11:06:42Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
xumingyu16/Baseline_2.9B
xumingyu16
2024-11-26T11:07:16Z
5
0
null
[ "pytorch", "baichuan", "custom_code", "license:apache-2.0", "region:us" ]
null
2024-11-26T06:39:36Z
--- license: apache-2.0 ---
briannlongzhao/fine_dining_textual_inversion
briannlongzhao
2024-11-26T11:03:42Z
10
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2023-11-15T04:26:30Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - briannlongzhao/fine_dining_textual_inversion These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
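A minimal, non-authoritative sketch of loading these weights with diffusers; the placeholder token `<fine-dining>` is an assumption, since the learned token name is not stated above.

```python
# Sketch: load SD 2.1 and attach the learned textual-inversion embedding from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Pull the learned embedding from this repository.
pipe.load_textual_inversion("briannlongzhao/fine_dining_textual_inversion")

# "<fine-dining>" stands in for the trained placeholder token; check the repo files for the real name.
image = pipe("a photo of <fine-dining>", num_inference_steps=30).images[0]
image.save("fine_dining.png")
```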
akshaya-244/Qwen2-VL-7B-Instruct-bnb-4bit-r8-ep3
akshaya-244
2024-11-26T10:58:30Z
6
0
transformers
[ "transformers", "safetensors", "qwen2_vl", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Qwen2-VL-7B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2-VL-7B-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-26T10:55:08Z
--- base_model: unsloth/Qwen2-VL-7B-Instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - qwen2_vl --- # Uploaded model - **Developed by:** akshaya-244 - **License:** apache-2.0 - **Finetuned from model :** unsloth/Qwen2-VL-7B-Instruct-bnb-4bit This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
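A hedged inference sketch only; the Unsloth vision API call and the 4-bit settings below are assumptions rather than documented usage.

```python
# Sketch: load the 4-bit fine-tune with Unsloth's vision API (available in recent unsloth releases).
# A CUDA GPU is assumed; adjust dtype and sequence length as needed.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "akshaya-244/Qwen2-VL-7B-Instruct-bnb-4bit-r8-ep3",
    load_in_4bit=True,
)
FastVisionModel.for_inference(model)  # switch the model to inference mode
```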
Deev124/hermes-llama3-roleplay-1000-v8
Deev124
2024-11-26T10:57:57Z
20
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T10:31:48Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
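A minimal, non-authoritative loading sketch based only on the repository tags (llama, text-generation, conversational); the chat prompt and generation settings are illustrative assumptions.

```python
# Sketch based on the repo tags, not on author documentation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Deev124/hermes-llama3-roleplay-1000-v8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Greet a traveler as a medieval innkeeper."}]  # illustrative
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```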
infra620/llama-3-8b-bnfx-finetune
infra620
2024-11-26T10:56:14Z
39
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-10-08T11:12:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hellonlp/promcse-bert-large-zh
hellonlp
2024-11-26T10:54:19Z
111
0
transformers
[ "transformers", "pytorch", "safetensors", "bert", "sentence-similarity", "zh", "license:mit", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-01-29T08:58:56Z
--- license: mit language: - zh pipeline_tag: sentence-similarity --- # PromCSE(sup) ## Data List The following datasets are all in Chinese. | Data | size(train) | size(valid) | size(test) | |:----------------------:|:----------:|:----------:|:----------:| | [ATEC](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1gmnyz9emqOXwaHhSM9CCUA%3Fpwd%3Db17c) | 62477| 20000| 20000| | [BQ](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1M-e01yyy5NacVPrph9fbaQ%3Fpwd%3Dtis9) | 100000| 10000| 10000| | [LCQMC](https://pan.baidu.com/s/16DfE7fHrCkk4e8a2j3SYUg?pwd=bc8w ) | 238766| 8802| 12500| | [PAWSX](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1ox0tJY3ZNbevHDeAqDBOPQ%3Fpwd%3Dmgjn) | 49401| 2000| 2000| | [STS-B](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/10yfKfTtcmLQ70-jzHIln1A%3Fpwd%3Dgf8y) | 5231| 1458| 1361| | [*SNLI*](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1NOgA7JwWghiauwGAUvcm7w%3Fpwd%3Ds75v) | 146828| 2699| 2618| | [*MNLI*](https://link.zhihu.com/?target=https%3A//pan.baidu.com/s/1xjZKtWk3MAbJ6HX4pvXJ-A%3Fpwd%3D2kte) | 122547| 2932| 2397| ## Model List The evaluation dataset is in Chinese, and we used the same language model **RoBERTa Large** on different methods. In addition, considering that the test set of some datasets is small, which may lead to a large deviation in evaluation accuracy, the evaluation data here uses train, valid and test at the same time, and the final evaluation result adopts the **weighted average (w-avg)** method. | Model | STS-B(w-avg) | ATEC | BQ | LCQMC | PAWSX | Avg. | |:-----------------------:|:------------:|:-----------:|:----------|:----------|:----------:|:----------:| | [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh) | 78.61| -| -| -| -| -| | [BAAI/bge-large-zh-v1.5](https://huggingface.co/BAAI/bge-large-zh-v1.5) | 79.07| -| -| -| -| -| | [hellonlp/simcse-large-zh](https://huggingface.co/hellonlp/simcse-roberta-large-zh) | 81.32| -| -| -| -| -| | [hellonlp/promcse-large-zh](https://huggingface.co/hellonlp/promcse-bert-large-zh) | 81.63| -| -| -| -| -| ## Uses To use the tool, first install the `promcse` package from [PyPI](https://pypi.org/project/promcse/) ```bash pip install promcse ``` After installing the package, you can load our model by two lines of code ```python from promcse import PromCSE model = PromCSE("hellonlp/promcse-bert-large-zh", "cls", 10) ``` Then you can use our model for encoding sentences into embeddings ```python embeddings = model.encode("武汉是一个美丽的城市。") print(embeddings.shape) #torch.Size([1024]) ``` Compute the cosine similarities between two groups of sentences ```python sentences_a = ['你好吗'] sentences_b = ['你怎么样','我吃了一个苹果','你过的好吗','你还好吗','你', '你好不好','你好不好呢','我不开心','我好开心啊', '你吃饭了吗', '你好吗','你现在好吗','你好个鬼'] similarities = model.similarity(sentences_a, sentences_b) print(similarities) # [(1.0, '你好吗'), # (0.9324, '你好不好'), # (0.8945, '你好不好呢'), # (0.8845, '你还好吗'), # (0.8382, '你现在好吗'), # (0.8072, '你过的好吗'), # (0.7648, '你怎么样'), # (0.6736, '你'), # (0.5706, '你吃饭了吗'), # (0.5417, '你好个鬼'), # (0.3747, '我好开心啊'), # (0.0777, '我不开心'), # (0.0624, '我吃了一个苹果')] ```
tommyjin/xlm-roberta-base-finetuned-panx-ko
tommyjin
2024-11-26T10:51:49Z
124
0
transformers
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-11-25T04:50:27Z
--- library_name: transformers license: mit base_model: xlm-roberta-base tags: - generated_from_trainer model-index: - name: xlm-roberta-base-finetuned-panx-ko results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ko This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0098 - F1 Score: 0.2061 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.6034 | 1.0 | 11 | 1.3438 | 0.0044 | | 1.2536 | 2.0 | 22 | 1.1247 | 0.1608 | | 0.9796 | 3.0 | 33 | 1.0098 | 0.2061 | ### Framework versions - Transformers 4.46.2 - Pytorch 2.5.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
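A minimal usage sketch; the Korean example sentence and the expectation of PAN-X-style entity labels are assumptions.

```python
# Sketch: run the fine-tuned checkpoint as a token-classification (NER) pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tommyjin/xlm-roberta-base-finetuned-panx-ko",
    aggregation_strategy="simple",
)
print(ner("삼성전자는 서울에 본사를 두고 있다."))  # illustrative Korean sentence
```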
CATIE-AQ/QAmembert2
CATIE-AQ
2024-11-26T10:46:13Z
139
0
transformers
[ "transformers", "safetensors", "roberta", "question-answering", "fr", "dataset:etalab-ia/piaf", "dataset:fquad", "dataset:lincoln/newsquadfr", "dataset:pragnakalp/squad_v2_french_translated", "dataset:CATIE-AQ/frenchQA", "arxiv:1910.09700", "arxiv:2411.08868", "base_model:almanach/camembertv2-base", "base_model:finetune:almanach/camembertv2-base", "license:mit", "co2_eq_emissions", "endpoints_compatible", "region:us" ]
question-answering
2024-11-21T17:51:06Z
--- language: fr datasets: - etalab-ia/piaf - fquad - lincoln/newsquadfr - pragnakalp/squad_v2_french_translated - CATIE-AQ/frenchQA library_name: transformers license: mit base_model: almanach/camembertv2-base metrics: - f1 - exact_match widget: - text: Combien de personnes utilisent le français tous les jours ? context: >- Le français est une langue indo-européenne de la famille des langues romanes dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue de Molière. Le français est parlé, en 2023, sur tous les continents par environ 321 millions de personnes : 235 millions l'emploient quotidiennement et 90 millions en sont des locuteurs natifs. En 2018, 80 millions d'élèves et étudiants s'instruisent en français dans le monde. Selon l'Organisation internationale de la francophonie (OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050. co2_eq_emissions: 66 --- # QAmemBERT2 ## Model Description We present **QAmemBERT2**, which is a [CamemBERT v2 base](https://huggingface.co/almanach/camembertv2-base) fine-tuned for the Question-Answering task for the French language on four French Q&A datasets composed of contexts and questions with their answers inside the context (= SQuAD 1.0 format) but also contexts and questions with their answers not inside the context (= SQuAD 2.0 format). All these datasets were concatenated into a single dataset that we called [frenchQA](https://huggingface.co/datasets/CATIE-AQ/frenchQA). This represents a total of over **221,348 context/question/answer triplets used to finetune this model and 6,376 to test it**. Our methodology is described in a blog post available in [English](https://blog.vaniila.ai/en/QA_en/) or [French](https://blog.vaniila.ai/QA/). ## Results (french QA test split) | Model | Parameters | Context | Exact_match | F1 | Answer_F1 | NoAnswer_F1 | | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | ----------- | | [etalab/camembert-base-squadFR-fquad-piaf](https://huggingface.co/AgentPublic/camembert-base-squadFR-fquad-piaf) | 110M | 512 tokens | 39.30 | 51.55 | 79.54 | 23.58 | [QAmembert](https://huggingface.co/CATIE-AQ/QAmembert)| 110M | 512 tokens | 77.14 | 86.88 | 75.66 | 98.11 | [QAmembert2](https://huggingface.co/CATIE-AQ/QAmembert2) (this version) | 112M | 1024 tokens | 76.47 | 88.25 | 78.66 | 97.84 | [QAmembert-large](https://huggingface.co/CATIE-AQ/QAmembert-large)| 336M | 512 tokens | 77.14 | 88.74 | 78.83 | **98.65** | [QAmemberta](https://huggingface.co/CATIE-AQ/QAmemberta) | 111M | 1024 tokens | **78.18** | **89.53** | **81.40** | 97.64 Looking at the “Answer_f1” column, Etalab's model appears to be competitive on texts where the answer to the question is indeed in the text provided (it does better than QAmemBERT-large, for example). However, the fact that it doesn't handle texts where the answer to the question is not in the text provided is a drawback. In all cases, whether in terms of metrics, number of parameters or context size, QAmemBERTa achieves the best results. We therefore invite the reader to choose this model. ### Usage ```python from transformers import pipeline qa = pipeline('question-answering', model='CATIE-AQ/QAmembert2', tokenizer='CATIE-AQ/QAmembert2') result = qa({ 'question': "Combien de personnes utilisent le français tous les jours ?", 'context': "Le français est une langue indo-européenne de la famille des langues romanes dont les locuteurs sont appelés francophones. Elle est parfois surnommée la langue de Molière. 
Le français est parlé, en 2023, sur tous les continents par environ 321 millions de personnes : 235 millions l'emploient quotidiennement et 90 millions en sont des locuteurs natifs. En 2018, 80 millions d'élèves et étudiants s'instruisent en français dans le monde. Selon l'Organisation internationale de la francophonie (OIF), il pourrait y avoir 700 millions de francophones sur Terre en 2050." }) if result['score'] < 0.01: print("La réponse n'est pas dans le contexte fourni.") else : print(result['answer']) ``` ### Try it through Space A Space has been created to test the model. It is available [here](https://huggingface.co/spaces/CATIE-AQ/Qamembert). ## Environmental Impact *Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.* - **Hardware Type:** A100 PCIe 40/80GB - **Hours used:** 4h and 47 min - **Cloud Provider:** Private Infrastructure - **Carbon Efficiency (kg/kWh):** 0.055kg (estimated from [electricitymaps](https://app.electricitymaps.com/zone/FR) ; we take the carbon intensity in France for November 21, 2024.) - **Carbon Emitted** *(Power consumption x Time x Carbon produced based on location of power grid)*: **0.066 kg eq. CO2** ## Citations ### QAmemBERT2 & QAmemBERTa ``` @misc {qamemberta2024, author = { {BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { QAmemberta (Revision 976a70b) }, year = 2024, url = { https://huggingface.co/CATIE-AQ/QAmemberta }, doi = { 10.57967/hf/3639 }, publisher = { Hugging Face } } ``` ### QAmemBERT ``` @misc {qamembert2023, author = { {ALBAR, Boris and BEDU, Pierre and BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { QAmembert (Revision 9685bc3) }, year = 2023, url = { https://huggingface.co/CATIE-AQ/QAmembert}, doi = { 10.57967/hf/0821 }, publisher = { Hugging Face } } ``` ### CamemBERT ``` @inproceedings{martin2020camembert, title={CamemBERT: a Tasty French Language Model}, author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t}, booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics}, year={2020} } ``` ### CamemBERT 2.0 ``` @misc{antoun2024camembert20smarterfrench, title={CamemBERT 2.0: A Smarter French Language Model Aged to Perfection}, author={Wissam Antoun and Francis Kulumba and Rian Touchent and Éric de la Clergerie and Benoît Sagot and Djamé Seddah}, year={2024}, eprint={2411.08868}, archivePrefix={arXiv}, primaryClass={cs.CL}, url={https://arxiv.org/abs/2411.08868}, } ``` ### frenchQA ``` @misc {frenchQA2023, author = { {ALBAR, Boris and BEDU, Pierre and BOURDOIS, Loïck} }, organization = { {Centre Aquitain des Technologies de l'Information et Electroniques} }, title = { frenchQA (Revision 6249cd5) }, year = 2023, url = { https://huggingface.co/CATIE-AQ/frenchQA }, doi = { 10.57967/hf/0862 }, publisher = { Hugging Face } } ``` ### PIAF ``` @inproceedings{KeraronLBAMSSS20, author = {Rachel Keraron and Guillaume Lancrenon and Mathilde Bras and Fr{\'{e}}d{\'{e}}ric Allary and Gilles Moyse and Thomas Scialom and Edmundo{-}Pavel Soriano{-}Morales and Jacopo Staiano}, title = 
{Project {PIAF:} Building a Native French Question-Answering Dataset}, booktitle = {{LREC}}, pages = {5481--5490}, publisher = {European Language Resources Association}, year = {2020} } ``` ### FQuAD ``` @article{dHoffschmidt2020FQuADFQ, title={FQuAD: French Question Answering Dataset}, author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich}, journal={ArXiv}, year={2020}, volume={abs/2002.06071} } ``` ### lincoln/newsquadfr ``` Hugging Face repository: https://hf.co/datasets/lincoln/newsquadfr ``` ### pragnakalp/squad_v2_french_translated ``` Hugging Face repository: https://hf.co/datasets/pragnakalp/squad_v2_french_translated ``` ## License MIT
AliSaadatV/LoRA_esm2_t6_8M_UR50D-finetunedv2-ACT_SITE
AliSaadatV
2024-11-26T10:43:59Z
8
0
peft
[ "peft", "safetensors", "esm", "arxiv:1910.09700", "base_model:facebook/esm2_t6_8M_UR50D", "base_model:adapter:facebook/esm2_t6_8M_UR50D", "region:us" ]
null
2024-11-26T10:41:18Z
--- base_model: facebook/esm2_t6_8M_UR50D library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
peft-internal-testing/tiny_T5ForSeq2SeqLM-lora
peft-internal-testing
2024-11-26T10:31:55Z
24,660
0
peft
[ "peft", "safetensors", "region:us" ]
null
2023-07-13T13:44:40Z
--- library_name: peft --- ## Training procedure ### Framework versions - PEFT 0.4.0.dev0 - Updated: PEFT 0.13.3.dev0 (6a533b783dc757705df9f8698218abec40e54683)
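A minimal sketch of attaching this test adapter to its base model; the base checkpoint is read from the adapter config rather than assumed by name.

```python
# Sketch: attach the LoRA adapter to the tiny T5 base recorded in its adapter_config.json.
from peft import PeftConfig, PeftModel
from transformers import AutoModelForSeq2SeqLM

adapter_id = "peft-internal-testing/tiny_T5ForSeq2SeqLM-lora"
config = PeftConfig.from_pretrained(adapter_id)                  # reads base_model_name_or_path
base = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, adapter_id)              # wrap the base with the LoRA weights
model.eval()
```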
mradermacher/magnum-v4-9b-GGUF
mradermacher
2024-11-26T10:30:16Z
62
2
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_16k_llama_v1.1", "dataset:NewEden/Claude-Instruct-5K", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-9b", "base_model:quantized:anthracite-org/magnum-v4-9b", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T08:04:04Z
--- base_model: anthracite-org/magnum-v4-9b datasets: - anthracite-org/c2_logs_16k_llama_v1.1 - NewEden/Claude-Instruct-5K - anthracite-org/kalo-opus-instruct-22k-no-refusal - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anthracite-org/magnum-v4-9b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v4-9b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q2_K.gguf) | Q2_K | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q3_K_S.gguf) | Q3_K_S | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q3_K_L.gguf) | Q3_K_L | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.IQ4_XS.gguf) | IQ4_XS | 5.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q5_K_S.gguf) | Q5_K_S | 6.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q5_K_M.gguf) | Q5_K_M | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q6_K.gguf) | Q6_K | 7.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-9b-GGUF/resolve/main/magnum-v4-9b.f16.gguf) | f16 | 18.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
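A hedged Python sketch for one of the single-file quants above; the chat prompt and context size are illustrative, not recommendations from the quantizer.

```python
# Sketch: download the Q4_K_M quant from this repo and run it with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/magnum-v4-9b-GGUF",
    filename="magnum-v4-9b.Q4_K_M.gguf",  # the "fast, recommended" entry in the table above
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write two sentences set in a lighthouse."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```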
PARKISU/klue-roberta-base-klue-sts
PARKISU
2024-11-26T10:30:10Z
5
0
sentence-transformers
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
2024-11-26T10:29:44Z
--- library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # PARKISU/klue-roberta-base-klue-sts This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('PARKISU/klue-roberta-base-klue-sts') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('PARKISU/klue-roberta-base-klue-sts') model = AutoModel.from_pretrained('PARKISU/klue-roberta-base-klue-sts') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=PARKISU/klue-roberta-base-klue-sts) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 657 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 100, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
mradermacher/magnum-v4-27b-GGUF
mradermacher
2024-11-26T10:30:07Z
68
1
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_16k_llama_v1.1", "dataset:NewEden/Claude-Instruct-5K", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-27b", "base_model:quantized:anthracite-org/magnum-v4-27b", "license:gemma", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T08:27:02Z
--- base_model: anthracite-org/magnum-v4-27b datasets: - anthracite-org/c2_logs_16k_llama_v1.1 - NewEden/Claude-Instruct-5K - anthracite-org/kalo-opus-instruct-22k-no-refusal - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: gemma quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anthracite-org/magnum-v4-27b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v4-27b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q2_K.gguf) | Q2_K | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q3_K_S.gguf) | Q3_K_S | 12.3 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q3_K_M.gguf) | Q3_K_M | 13.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q3_K_L.gguf) | Q3_K_L | 14.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.IQ4_XS.gguf) | IQ4_XS | 15.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q4_K_S.gguf) | Q4_K_S | 15.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q4_K_M.gguf) | Q4_K_M | 16.7 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q5_K_S.gguf) | Q5_K_S | 19.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q5_K_M.gguf) | Q5_K_M | 19.5 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q6_K.gguf) | Q6_K | 22.4 | very good quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-27b-GGUF/resolve/main/magnum-v4-27b.Q8_0.gguf) | Q8_0 | 29.0 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
mradermacher/magnum-v4-123b-GGUF
mradermacher
2024-11-26T10:30:03Z
13
0
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_16k_mistral-large_v1.2", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-123b", "base_model:quantized:anthracite-org/magnum-v4-123b", "license:other", "endpoints_compatible", "region:us", "conversational" ]
null
2024-10-20T08:48:20Z
--- base_model: anthracite-org/magnum-v4-123b datasets: - anthracite-org/c2_logs_16k_mistral-large_v1.2 - anthracite-org/kalo-opus-instruct-22k-no-refusal - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: other license_name: mrl quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/anthracite-org/magnum-v4-123b <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/magnum-v4-123b-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q2_K.gguf) | Q2_K | 45.3 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_S.gguf.part2of2) | Q3_K_S | 52.9 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_M.gguf.part2of2) | Q3_K_M | 59.2 | lower quality | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q3_K_L.gguf.part2of2) | Q3_K_L | 64.7 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.IQ4_XS.gguf.part2of2) | IQ4_XS | 66.1 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q4_K_S.gguf.part2of2) | Q4_K_S | 69.7 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q4_K_M.gguf.part2of2) | Q4_K_M | 73.3 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q5_K_S.gguf.part2of2) | Q5_K_S | 84.5 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q5_K_M.gguf.part2of2) | Q5_K_M | 86.6 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q6_K.gguf.part1of3) [PART 
2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q6_K.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q6_K.gguf.part3of3) | Q6_K | 100.7 | very good quality | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q8_0.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q8_0.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/magnum-v4-123b-GGUF/resolve/main/magnum-v4-123b.Q8_0.gguf.part3of3) | Q8_0 | 130.4 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
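The split quants above are used by concatenating their parts in order (see the linked README); a hedged Python sketch, assuming roughly 70 GB of free disk for Q4_K_S:

```python
# Sketch: download both parts of the Q4_K_S quant and join them into one .gguf file.
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/magnum-v4-123b-GGUF"
parts = [
    "magnum-v4-123b.Q4_K_S.gguf.part1of2",
    "magnum-v4-123b.Q4_K_S.gguf.part2of2",
]
with open("magnum-v4-123b.Q4_K_S.gguf", "wb") as out:
    for name in parts:
        path = hf_hub_download(repo_id=repo, filename=name)
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)  # append this part to the combined file
```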
mradermacher/magnum-v4-72b-i1-GGUF
mradermacher
2024-11-26T10:29:38Z
387
2
transformers
[ "transformers", "gguf", "chat", "en", "dataset:anthracite-org/c2_logs_32k_llama3_qwen2_v1.2", "dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal", "dataset:lodrick-the-lafted/kalo-opus-instruct-3k-filtered", "dataset:anthracite-org/nopm_claude_writing_fixed", "dataset:anthracite-org/kalo_opus_misc_240827", "dataset:anthracite-org/kalo_misc_part2", "base_model:anthracite-org/magnum-v4-72b", "base_model:quantized:anthracite-org/magnum-v4-72b", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
2024-10-20T22:07:41Z
--- base_model: anthracite-org/magnum-v4-72b datasets: - anthracite-org/c2_logs_32k_llama3_qwen2_v1.2 - anthracite-org/kalo-opus-instruct-22k-no-refusal - lodrick-the-lafted/kalo-opus-instruct-3k-filtered - anthracite-org/nopm_claude_writing_fixed - anthracite-org/kalo_opus_misc_240827 - anthracite-org/kalo_misc_part2 language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - chat --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/anthracite-org/magnum-v4-72b <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/magnum-v4-72b-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality | | 
[GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | | | [PART 1](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/magnum-v4-72b-i1-GGUF/resolve/main/magnum-v4-72b.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
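A small sketch of fetching one of the single-file imatrix quants listed above; the choice of i1-Q4_K_M simply follows the table's "fast, recommended" note.

```python
# Sketch: cache the i1-Q4_K_M quant locally; the returned path works with any GGUF runtime.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/magnum-v4-72b-i1-GGUF",
    filename="magnum-v4-72b.i1-Q4_K_M.gguf",
)
print(path)  # e.g. pass this path to llama.cpp or llama-cpp-python
```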
BabakBagheriGisour/QuizErsteller
BabakBagheriGisour
2024-11-26T10:28:02Z
51
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "quiz_maker", "quiz_ersteller", "multiple-choice-question-generation", "educational-tools", "pdf-to-quiz", "gpt-neo", "text-analysis", "natural-language-processing", "language-model", "quiz-generator", "AI-education", "question-generation", "fine-tuned-model", "text2text-generation", "de", "dataset:BabakBagheriGisour/Networkplus_Dataset", "base_model:EleutherAI/gpt-neo-125m", "base_model:finetune:EleutherAI/gpt-neo-125m", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-20T16:56:45Z
--- license: apache-2.0 language: - de metrics: - accuracy - bleu - f1 base_model: - EleutherAI/gpt-neo-125m pipeline_tag: text2text-generation library_name: transformers tags: - quiz_maker - quiz_ersteller - multiple-choice-question-generation - educational-tools - pdf-to-quiz - gpt-neo - text-analysis - natural-language-processing - language-model - quiz-generator - AI-education - transformers - question-generation - fine-tuned-model datasets: - BabakBagheriGisour/Networkplus_Dataset ---
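Since the card above ships only metadata, here is a hedged usage sketch with `transformers`, assuming the checkpoint loads as a causal LM like its GPT-Neo base; the German prompt is invented for illustration and may not match the format the model was fine-tuned on.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BabakBagheriGisour/QuizErsteller"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical prompt: the card does not document the expected input format.
prompt = "Erstelle eine Multiple-Choice-Frage zum Thema Subnetting:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```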
marwaALzaabi/plant-disease-detection-vit
marwaALzaabi
2024-11-26T10:25:12Z
203
0
transformers
[ "transformers", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-large-patch16-224-in21k", "base_model:finetune:google/vit-large-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-11-26T10:08:44Z
--- library_name: transformers license: apache-2.0 base_model: google/vit-large-patch16-224-in21k tags: - generated_from_trainer metrics: - accuracy model-index: - name: plant-disease-detection-vit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # plant-disease-detection-vit This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0002 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0007 | 1.0 | 45 | 0.0004 | 1.0 | | 0.0004 | 2.0 | 90 | 0.0317 | 0.9889 | | 0.0003 | 3.0 | 135 | 0.0003 | 1.0 | | 0.0002 | 4.0 | 180 | 0.0002 | 1.0 | | 0.0002 | 5.0 | 225 | 0.0002 | 1.0 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.4.1+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
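A minimal inference sketch for the card above, using the generic `transformers` image-classification pipeline; the image path is a placeholder, since the card includes no example inputs.

```python
from transformers import pipeline

# Generic image-classification pipeline around the fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="marwaALzaabi/plant-disease-detection-vit",
)

# Placeholder image path; the card ships no example inputs.
for pred in classifier("leaf_photo.jpg", top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```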
peft-internal-testing/tiny-LlavaForConditionalGeneration
peft-internal-testing
2024-11-26T10:19:02Z
6,804
0
transformers
[ "transformers", "safetensors", "llava", "image-text-to-text", "trl", "conversational", "endpoints_compatible", "region:us" ]
image-text-to-text
2024-11-26T10:17:30Z
--- library_name: transformers tags: - trl --- # Tiny LlavaForConditionalGeneration PEFT copy of trl-internal-testing/tiny-LlavaForConditionalGeneration, a minimal model built for unit tests.
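A hedged sketch of how such a tiny fixture might be exercised in a test suite; the assertions are illustrative and not taken from the actual tests.

```python
from transformers import AutoProcessor, LlavaForConditionalGeneration

MODEL_ID = "peft-internal-testing/tiny-LlavaForConditionalGeneration"


def test_tiny_llava_loads_on_cpu():
    # Smoke test: the tiny checkpoint should instantiate quickly on CPU.
    model = LlavaForConditionalGeneration.from_pretrained(MODEL_ID)
    processor = AutoProcessor.from_pretrained(MODEL_ID)
    assert model.config.model_type == "llava"
    assert processor is not None
```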
Tippawan/mT5-mt-v1-26nov24
Tippawan
2024-11-26T10:17:01Z
105
0
transformers
[ "transformers", "safetensors", "mt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2024-11-26T10:15:52Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
briannlongzhao/cathedral_textual_inversion
briannlongzhao
2024-11-26T10:13:53Z
14
0
diffusers
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:stabilityai/stable-diffusion-2-1", "base_model:adapter:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-11-17T02:03:10Z
--- license: creativeml-openrail-m base_model: stabilityai/stable-diffusion-2-1 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - textual_inversion inference: true --- # Textual inversion text2image fine-tuning - briannlongzhao/cathedral_textual_inversion These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images in the following.
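A minimal usage sketch for the card above with `diffusers`; the `<cathedral>` placeholder token is an assumption, since the card does not state the learned token name, so check the repository's learned embedding file before relying on it.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Pull the learned embedding from this repository. The "<cathedral>"
# placeholder token below is an assumption; check the repository's
# learned_embeds file for the actual token name.
pipe.load_textual_inversion("briannlongzhao/cathedral_textual_inversion")

image = pipe("a watercolor painting of a <cathedral> at sunset").images[0]
image.save("cathedral.png")
```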
ngia/whisper-small-wolof-v2
ngia
2024-11-26T10:13:26Z
77
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "fr", "dataset:IndabaxSenegal/asr-wolof-dataset", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-24T23:56:57Z
--- library_name: transformers language: - fr license: apache-2.0 base_model: openai/whisper-small tags: - generated_from_trainer datasets: - IndabaxSenegal/asr-wolof-dataset model-index: - name: Whisper Small WO - Team results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small WO - Team This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ASR Wolof Dataset dataset. It achieves the following results on the evaluation set: - eval_loss: 1.8224 - eval_wer: 111.3192 - eval_runtime: 3443.366 - eval_samples_per_second: 0.754 - eval_steps_per_second: 0.094 - epoch: 1.5385 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.45.1 - Pytorch 2.4.0 - Datasets 3.0.1 - Tokenizers 0.20.0
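A minimal transcription sketch for the card above; the audio filename is a placeholder, and since the card reports a high intermediate WER, transcripts may be rough.

```python
from transformers import pipeline

# ASR pipeline wrapping the fine-tuned Whisper-small checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="ngia/whisper-small-wolof-v2",
    chunk_length_s=30,
)

# Placeholder audio file; a 16 kHz mono WAV is the usual input.
print(asr("sample_wolof.wav")["text"])
```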
gpellejero/model_vllm
gpellejero
2024-11-26T10:10:37Z
9
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-11-26T09:24:54Z
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
bryandts/whisper-base-en-india-accent-svarah
bryandts
2024-11-26T10:09:52Z
78
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "dataset:Bhargav0044/svarah1", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-11-25T14:11:10Z
--- library_name: transformers license: apache-2.0 base_model: openai/whisper-base tags: - generated_from_trainer metrics: - wer datasets: - Bhargav0044/svarah1 language: - en model-index: - name: whisper-base-en-india-accent-svarah results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-base-en-india-accent-svarah This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an [svarah](https://huggingface.co/datasets/Bhargav0044/svarah1) dataset. It achieves the following results on the evaluation set: - Loss: 0.3400 - Wer: 16.3057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.8871 | 1.0 | 47 | 0.8439 | 26.5605 | | 0.5938 | 2.0 | 94 | 0.4767 | 21.4809 | | 0.402 | 3.0 | 141 | 0.4090 | 18.8854 | | 0.3359 | 4.0 | 188 | 0.3824 | 17.8503 | | 0.2878 | 5.0 | 235 | 0.3632 | 17.4841 | | 0.2416 | 6.0 | 282 | 0.3505 | 16.9904 | | 0.1986 | 7.0 | 329 | 0.3422 | 16.7834 | | 0.1596 | 8.0 | 376 | 0.3400 | 16.3057 | | 0.1232 | 9.0 | 423 | 0.3427 | 16.6242 | | 0.0901 | 10.0 | 470 | 0.3610 | 16.7357 | ### Framework versions - Transformers 4.46.3 - Pytorch 2.5.1+cu121 - Datasets 3.1.0 - Tokenizers 0.20.3
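The hyperparameters listed in the card above can be mirrored with `Seq2SeqTrainingArguments`; a sketch under the assumption that the card was produced with the standard `transformers` Seq2SeqTrainer recipe, with dataset loading, the data collator, and WER computation omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; model/dataset wiring, the data
# collator, and metric computation are left out of this sketch.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-base-en-india-accent-svarah",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # effective train batch size of 128
    warmup_steps=500,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```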
MayBashendy/ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task1_organization_fold0
MayBashendy
2024-11-26T10:02:33Z
164
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:aubmindlab/bert-base-arabertv02", "base_model:finetune:aubmindlab/bert-base-arabertv02", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-11-26T09:39:42Z
--- library_name: transformers base_model: aubmindlab/bert-base-arabertv02 tags: - generated_from_trainer model-index: - name: ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task1_organization_fold0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ArabicNewSplits_FineTuningAraBERT_AugV5_k30_task1_organization_fold0 This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7266 - Qwk: 0.3237 - Mse: 1.7266 - Rmse: 1.3140 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:| | No log | 0.0135 | 2 | 5.2335 | -0.1107 | 5.2335 | 2.2877 | | No log | 0.0270 | 4 | 3.1817 | -0.0095 | 3.1817 | 1.7837 | | No log | 0.0405 | 6 | 1.8187 | -0.0855 | 1.8187 | 1.3486 | | No log | 0.0541 | 8 | 1.8235 | -0.0964 | 1.8235 | 1.3504 | | No log | 0.0676 | 10 | 1.5957 | -0.0072 | 1.5957 | 1.2632 | | No log | 0.0811 | 12 | 1.3978 | 0.0322 | 1.3978 | 1.1823 | | No log | 0.0946 | 14 | 1.3388 | 0.0160 | 1.3388 | 1.1571 | | No log | 0.1081 | 16 | 1.6615 | -0.0621 | 1.6615 | 1.2890 | | No log | 0.1216 | 18 | 2.0941 | -0.1053 | 2.0941 | 1.4471 | | No log | 0.1351 | 20 | 2.0531 | -0.0312 | 2.0531 | 1.4329 | | No log | 0.1486 | 22 | 1.8380 | 0.0693 | 1.8380 | 1.3557 | | No log | 0.1622 | 24 | 1.4679 | 0.0 | 1.4679 | 1.2116 | | No log | 0.1757 | 26 | 1.2541 | 0.0 | 1.2541 | 1.1199 | | No log | 0.1892 | 28 | 1.2979 | 0.0 | 1.2979 | 1.1393 | | No log | 0.2027 | 30 | 1.2625 | 0.0 | 1.2625 | 1.1236 | | No log | 0.2162 | 32 | 1.3548 | 0.0 | 1.3548 | 1.1640 | | No log | 0.2297 | 34 | 1.3892 | 0.0 | 1.3892 | 1.1786 | | No log | 0.2432 | 36 | 1.3796 | 0.0 | 1.3796 | 1.1746 | | No log | 0.2568 | 38 | 1.4907 | 0.0337 | 1.4907 | 1.2210 | | No log | 0.2703 | 40 | 1.4659 | 0.0969 | 1.4659 | 1.2107 | | No log | 0.2838 | 42 | 1.3346 | 0.1335 | 1.3346 | 1.1552 | | No log | 0.2973 | 44 | 1.2884 | 0.1673 | 1.2884 | 1.1351 | | No log | 0.3108 | 46 | 1.1205 | 0.2274 | 1.1205 | 1.0585 | | No log | 0.3243 | 48 | 1.0070 | 0.2174 | 1.0070 | 1.0035 | | No log | 0.3378 | 50 | 1.0672 | 0.1800 | 1.0672 | 1.0330 | | No log | 0.3514 | 52 | 1.1199 | 0.0727 | 1.1199 | 1.0583 | | No log | 0.3649 | 54 | 0.9483 | 0.3864 | 0.9483 | 0.9738 | | No log | 0.3784 | 56 | 0.9642 | 0.3732 | 0.9642 | 0.9820 | | No log | 0.3919 | 58 | 1.2160 | 0.1665 | 1.2160 | 1.1027 | | No log | 0.4054 | 60 | 1.4818 | 0.1590 | 1.4818 | 1.2173 | | No log | 0.4189 | 62 | 1.7349 | 0.2441 | 1.7349 | 1.3172 | | No log | 0.4324 | 64 | 1.6944 | 0.1590 | 1.6944 | 1.3017 | | No log | 0.4459 | 66 | 1.5373 | 0.1335 | 1.5373 | 1.2399 | | No log | 0.4595 | 68 | 1.3582 | 0.0337 | 1.3582 | 1.1654 | | No log | 0.4730 | 70 | 1.2759 | 0.0337 | 1.2759 | 1.1296 | | No log | 0.4865 | 72 | 1.2605 | 0.0 | 1.2605 | 1.1227 | | No log | 0.5 | 74 | 1.2404 
| 0.0 | 1.2404 | 1.1137 | | No log | 0.5135 | 76 | 1.2588 | 0.0337 | 1.2588 | 1.1220 | | No log | 0.5270 | 78 | 1.3918 | 0.0337 | 1.3918 | 1.1798 | | No log | 0.5405 | 80 | 1.3834 | 0.0337 | 1.3834 | 1.1762 | | No log | 0.5541 | 82 | 1.3473 | 0.0 | 1.3473 | 1.1607 | | No log | 0.5676 | 84 | 1.3207 | 0.0 | 1.3207 | 1.1492 | | No log | 0.5811 | 86 | 1.2702 | 0.0 | 1.2702 | 1.1270 | | No log | 0.5946 | 88 | 1.2076 | 0.0 | 1.2076 | 1.0989 | | No log | 0.6081 | 90 | 1.1803 | 0.0 | 1.1803 | 1.0864 | | No log | 0.6216 | 92 | 1.2039 | 0.0 | 1.2039 | 1.0972 | | No log | 0.6351 | 94 | 1.2052 | 0.0 | 1.2052 | 1.0978 | | No log | 0.6486 | 96 | 1.2167 | 0.0 | 1.2167 | 1.1030 | | No log | 0.6622 | 98 | 1.2227 | 0.0 | 1.2227 | 1.1057 | | No log | 0.6757 | 100 | 1.1598 | 0.0607 | 1.1598 | 1.0769 | | No log | 0.6892 | 102 | 1.0995 | 0.1579 | 1.0995 | 1.0486 | | No log | 0.7027 | 104 | 1.0991 | 0.0997 | 1.0991 | 1.0484 | | No log | 0.7162 | 106 | 1.1314 | 0.1491 | 1.1314 | 1.0637 | | No log | 0.7297 | 108 | 1.1349 | 0.1491 | 1.1349 | 1.0653 | | No log | 0.7432 | 110 | 1.1629 | 0.1234 | 1.1629 | 1.0784 | | No log | 0.7568 | 112 | 1.1748 | 0.0975 | 1.1748 | 1.0839 | | No log | 0.7703 | 114 | 1.1791 | 0.0698 | 1.1791 | 1.0859 | | No log | 0.7838 | 116 | 1.2089 | 0.0337 | 1.2089 | 1.0995 | | No log | 0.7973 | 118 | 1.2155 | 0.0698 | 1.2155 | 1.1025 | | No log | 0.8108 | 120 | 1.2039 | 0.0698 | 1.2039 | 1.0972 | | No log | 0.8243 | 122 | 1.1563 | 0.1290 | 1.1563 | 1.0753 | | No log | 0.8378 | 124 | 1.1182 | 0.1290 | 1.1182 | 1.0574 | | No log | 0.8514 | 126 | 1.1210 | 0.1477 | 1.1210 | 1.0588 | | No log | 0.8649 | 128 | 1.1904 | 0.1247 | 1.1904 | 1.0910 | | No log | 0.8784 | 130 | 1.1137 | 0.2347 | 1.1137 | 1.0553 | | No log | 0.8919 | 132 | 1.0093 | 0.2138 | 1.0093 | 1.0046 | | No log | 0.9054 | 134 | 1.0364 | 0.2435 | 1.0364 | 1.0180 | | No log | 0.9189 | 136 | 1.0716 | 0.1727 | 1.0716 | 1.0352 | | No log | 0.9324 | 138 | 1.0136 | 0.2329 | 1.0136 | 1.0068 | | No log | 0.9459 | 140 | 0.9620 | 0.2977 | 0.9620 | 0.9808 | | No log | 0.9595 | 142 | 0.9558 | 0.3251 | 0.9558 | 0.9777 | | No log | 0.9730 | 144 | 0.9550 | 0.3401 | 0.9550 | 0.9773 | | No log | 0.9865 | 146 | 0.9420 | 0.3686 | 0.9420 | 0.9706 | | No log | 1.0 | 148 | 0.9246 | 0.3701 | 0.9246 | 0.9616 | | No log | 1.0135 | 150 | 0.9180 | 0.3266 | 0.9180 | 0.9581 | | No log | 1.0270 | 152 | 0.9575 | 0.2593 | 0.9575 | 0.9785 | | No log | 1.0405 | 154 | 1.0584 | 0.2174 | 1.0584 | 1.0288 | | No log | 1.0541 | 156 | 1.0668 | 0.2174 | 1.0668 | 1.0328 | | No log | 1.0676 | 158 | 1.0137 | 0.2297 | 1.0137 | 1.0068 | | No log | 1.0811 | 160 | 0.9387 | 0.2506 | 0.9387 | 0.9689 | | No log | 1.0946 | 162 | 0.8953 | 0.3671 | 0.8953 | 0.9462 | | No log | 1.1081 | 164 | 0.9150 | 0.2730 | 0.9150 | 0.9565 | | No log | 1.1216 | 166 | 0.9510 | 0.3003 | 0.9510 | 0.9752 | | No log | 1.1351 | 168 | 1.0276 | 0.2626 | 1.0276 | 1.0137 | | No log | 1.1486 | 170 | 1.1106 | 0.2525 | 1.1106 | 1.0538 | | No log | 1.1622 | 172 | 1.1181 | 0.2626 | 1.1181 | 1.0574 | | No log | 1.1757 | 174 | 1.1307 | 0.2508 | 1.1307 | 1.0633 | | No log | 1.1892 | 176 | 1.2078 | 0.2103 | 1.2078 | 1.0990 | | No log | 1.2027 | 178 | 1.1869 | 0.1618 | 1.1869 | 1.0894 | | No log | 1.2162 | 180 | 1.1069 | 0.2204 | 1.1069 | 1.0521 | | No log | 1.2297 | 182 | 1.0181 | 0.2508 | 1.0181 | 1.0090 | | No log | 1.2432 | 184 | 0.9642 | 0.2711 | 0.9642 | 0.9819 | | No log | 1.2568 | 186 | 0.9634 | 0.2711 | 0.9634 | 0.9815 | | No log | 1.2703 | 188 | 1.0079 | 0.2609 | 1.0079 | 1.0039 | | No log | 1.2838 | 190 | 1.0805 | 
0.1785 | 1.0805 | 1.0395 | | No log | 1.2973 | 192 | 1.1166 | 0.2525 | 1.1166 | 1.0567 | | No log | 1.3108 | 194 | 1.1364 | 0.1926 | 1.1364 | 1.0660 | | No log | 1.3243 | 196 | 1.1078 | 0.2406 | 1.1078 | 1.0525 | | No log | 1.3378 | 198 | 1.0697 | 0.2508 | 1.0697 | 1.0342 | | No log | 1.3514 | 200 | 1.0552 | 0.2609 | 1.0552 | 1.0272 | | No log | 1.3649 | 202 | 1.0371 | 0.2339 | 1.0371 | 1.0184 | | No log | 1.3784 | 204 | 1.0627 | 0.1983 | 1.0627 | 1.0309 | | No log | 1.3919 | 206 | 1.0455 | 0.1609 | 1.0455 | 1.0225 | | No log | 1.4054 | 208 | 0.9850 | 0.3432 | 0.9850 | 0.9925 | | No log | 1.4189 | 210 | 1.0006 | 0.2651 | 1.0006 | 1.0003 | | No log | 1.4324 | 212 | 1.0555 | 0.2756 | 1.0555 | 1.0274 | | No log | 1.4459 | 214 | 1.1072 | 0.3301 | 1.1072 | 1.0522 | | No log | 1.4595 | 216 | 1.1680 | 0.3846 | 1.1680 | 1.0807 | | No log | 1.4730 | 218 | 1.2022 | 0.3411 | 1.2022 | 1.0964 | | No log | 1.4865 | 220 | 1.1693 | 0.3582 | 1.1693 | 1.0814 | | No log | 1.5 | 222 | 1.1166 | 0.3520 | 1.1166 | 1.0567 | | No log | 1.5135 | 224 | 1.1150 | 0.3661 | 1.1150 | 1.0560 | | No log | 1.5270 | 226 | 1.1458 | 0.4074 | 1.1458 | 1.0704 | | No log | 1.5405 | 228 | 1.1801 | 0.3733 | 1.1801 | 1.0863 | | No log | 1.5541 | 230 | 1.1513 | 0.3733 | 1.1513 | 1.0730 | | No log | 1.5676 | 232 | 1.0663 | 0.3826 | 1.0663 | 1.0326 | | No log | 1.5811 | 234 | 1.0451 | 0.3941 | 1.0451 | 1.0223 | | No log | 1.5946 | 236 | 1.0538 | 0.3952 | 1.0538 | 1.0266 | | No log | 1.6081 | 238 | 1.0993 | 0.2603 | 1.0993 | 1.0485 | | No log | 1.6216 | 240 | 1.0582 | 0.2851 | 1.0582 | 1.0287 | | No log | 1.6351 | 242 | 1.0215 | 0.2627 | 1.0215 | 1.0107 | | No log | 1.6486 | 244 | 0.9285 | 0.3444 | 0.9285 | 0.9636 | | No log | 1.6622 | 246 | 0.9246 | 0.3709 | 0.9246 | 0.9616 | | No log | 1.6757 | 248 | 0.9692 | 0.4396 | 0.9692 | 0.9845 | | No log | 1.6892 | 250 | 1.1109 | 0.4063 | 1.1109 | 1.0540 | | No log | 1.7027 | 252 | 1.1804 | 0.3636 | 1.1804 | 1.0864 | | No log | 1.7162 | 254 | 1.2655 | 0.3023 | 1.2655 | 1.1249 | | No log | 1.7297 | 256 | 1.3397 | 0.2408 | 1.3397 | 1.1575 | | No log | 1.7432 | 258 | 1.3037 | 0.2828 | 1.3037 | 1.1418 | | No log | 1.7568 | 260 | 1.2407 | 0.3209 | 1.2407 | 1.1139 | | No log | 1.7703 | 262 | 1.2004 | 0.2991 | 1.2004 | 1.0956 | | No log | 1.7838 | 264 | 1.1482 | 0.3398 | 1.1482 | 1.0715 | | No log | 1.7973 | 266 | 1.1552 | 0.3436 | 1.1552 | 1.0748 | | No log | 1.8108 | 268 | 1.1547 | 0.3590 | 1.1547 | 1.0746 | | No log | 1.8243 | 270 | 1.1758 | 0.3678 | 1.1758 | 1.0844 | | No log | 1.8378 | 272 | 1.2349 | 0.2730 | 1.2349 | 1.1113 | | No log | 1.8514 | 274 | 1.2174 | 0.2477 | 1.2174 | 1.1033 | | No log | 1.8649 | 276 | 1.2113 | 0.2477 | 1.2113 | 1.1006 | | No log | 1.8784 | 278 | 1.1779 | 0.2886 | 1.1779 | 1.0853 | | No log | 1.8919 | 280 | 1.1816 | 0.2635 | 1.1816 | 1.0870 | | No log | 1.9054 | 282 | 1.1797 | 0.2857 | 1.1797 | 1.0861 | | No log | 1.9189 | 284 | 1.1432 | 0.3151 | 1.1432 | 1.0692 | | No log | 1.9324 | 286 | 1.1140 | 0.2844 | 1.1140 | 1.0555 | | No log | 1.9459 | 288 | 1.1242 | 0.3019 | 1.1242 | 1.0603 | | No log | 1.9595 | 290 | 1.1055 | 0.2918 | 1.1055 | 1.0514 | | No log | 1.9730 | 292 | 1.0680 | 0.3621 | 1.0680 | 1.0334 | | No log | 1.9865 | 294 | 1.0166 | 0.3595 | 1.0166 | 1.0083 | | No log | 2.0 | 296 | 1.0173 | 0.3700 | 1.0173 | 1.0086 | | No log | 2.0135 | 298 | 1.0393 | 0.3032 | 1.0393 | 1.0195 | | No log | 2.0270 | 300 | 1.0941 | 0.3493 | 1.0941 | 1.0460 | | No log | 2.0405 | 302 | 1.1969 | 0.3922 | 1.1969 | 1.0940 | | No log | 2.0541 | 304 | 1.2401 | 0.3543 | 1.2401 | 1.1136 | 
| No log | 2.0676 | 306 | 1.3081 | 0.3429 | 1.3081 | 1.1437 | | No log | 2.0811 | 308 | 1.3743 | 0.3615 | 1.3743 | 1.1723 | | No log | 2.0946 | 310 | 1.3323 | 0.2897 | 1.3323 | 1.1543 | | No log | 2.1081 | 312 | 1.2064 | 0.3370 | 1.2064 | 1.0984 | | No log | 2.1216 | 314 | 1.0800 | 0.3506 | 1.0800 | 1.0392 | | No log | 2.1351 | 316 | 1.0597 | 0.3471 | 1.0597 | 1.0294 | | No log | 2.1486 | 318 | 1.0793 | 0.3357 | 1.0793 | 1.0389 | | No log | 2.1622 | 320 | 1.0572 | 0.3435 | 1.0572 | 1.0282 | | No log | 2.1757 | 322 | 1.0677 | 0.3269 | 1.0677 | 1.0333 | | No log | 2.1892 | 324 | 1.0953 | 0.2992 | 1.0953 | 1.0466 | | No log | 2.2027 | 326 | 1.1269 | 0.3534 | 1.1269 | 1.0615 | | No log | 2.2162 | 328 | 1.1930 | 0.3122 | 1.1930 | 1.0922 | | No log | 2.2297 | 330 | 1.1986 | 0.2864 | 1.1986 | 1.0948 | | No log | 2.2432 | 332 | 1.1612 | 0.3531 | 1.1612 | 1.0776 | | No log | 2.2568 | 334 | 1.1851 | 0.3531 | 1.1851 | 1.0886 | | No log | 2.2703 | 336 | 1.1595 | 0.3926 | 1.1595 | 1.0768 | | No log | 2.2838 | 338 | 1.0949 | 0.4518 | 1.0949 | 1.0464 | | No log | 2.2973 | 340 | 1.0942 | 0.4429 | 1.0942 | 1.0460 | | No log | 2.3108 | 342 | 1.1467 | 0.3597 | 1.1467 | 1.0709 | | No log | 2.3243 | 344 | 1.2363 | 0.3576 | 1.2363 | 1.1119 | | No log | 2.3378 | 346 | 1.2502 | 0.3660 | 1.2502 | 1.1181 | | No log | 2.3514 | 348 | 1.2327 | 0.3660 | 1.2327 | 1.1103 | | No log | 2.3649 | 350 | 1.1786 | 0.3564 | 1.1786 | 1.0857 | | No log | 2.3784 | 352 | 1.1659 | 0.3564 | 1.1659 | 1.0798 | | No log | 2.3919 | 354 | 1.1835 | 0.3467 | 1.1835 | 1.0879 | | No log | 2.4054 | 356 | 1.2503 | 0.3345 | 1.2503 | 1.1182 | | No log | 2.4189 | 358 | 1.2609 | 0.3571 | 1.2609 | 1.1229 | | No log | 2.4324 | 360 | 1.2476 | 0.3571 | 1.2476 | 1.1170 | | No log | 2.4459 | 362 | 1.2914 | 0.3417 | 1.2914 | 1.1364 | | No log | 2.4595 | 364 | 1.2736 | 0.3625 | 1.2736 | 1.1285 | | No log | 2.4730 | 366 | 1.2778 | 0.3417 | 1.2778 | 1.1304 | | No log | 2.4865 | 368 | 1.2244 | 0.3564 | 1.2244 | 1.1065 | | No log | 2.5 | 370 | 1.1758 | 0.3432 | 1.1758 | 1.0844 | | No log | 2.5135 | 372 | 1.2044 | 0.3479 | 1.2044 | 1.0974 | | No log | 2.5270 | 374 | 1.2264 | 0.3417 | 1.2264 | 1.1074 | | No log | 2.5405 | 376 | 1.3028 | 0.3607 | 1.3028 | 1.1414 | | No log | 2.5541 | 378 | 1.4414 | 0.3697 | 1.4414 | 1.2006 | | No log | 2.5676 | 380 | 1.5747 | 0.2861 | 1.5747 | 1.2549 | | No log | 2.5811 | 382 | 1.6126 | 0.2963 | 1.6126 | 1.2699 | | No log | 2.5946 | 384 | 1.5615 | 0.3111 | 1.5615 | 1.2496 | | No log | 2.6081 | 386 | 1.4145 | 0.3479 | 1.4145 | 1.1893 | | No log | 2.6216 | 388 | 1.3398 | 0.3789 | 1.3398 | 1.1575 | | No log | 2.6351 | 390 | 1.3333 | 0.3873 | 1.3333 | 1.1547 | | No log | 2.6486 | 392 | 1.4190 | 0.3953 | 1.4190 | 1.1912 | | No log | 2.6622 | 394 | 1.4791 | 0.3593 | 1.4791 | 1.2162 | | No log | 2.6757 | 396 | 1.4448 | 0.3833 | 1.4448 | 1.2020 | | No log | 2.6892 | 398 | 1.3238 | 0.3826 | 1.3238 | 1.1506 | | No log | 2.7027 | 400 | 1.2311 | 0.4139 | 1.2311 | 1.1095 | | No log | 2.7162 | 402 | 1.1936 | 0.3951 | 1.1936 | 1.0925 | | No log | 2.7297 | 404 | 1.1775 | 0.3951 | 1.1775 | 1.0851 | | No log | 2.7432 | 406 | 1.2481 | 0.3625 | 1.2481 | 1.1172 | | No log | 2.7568 | 408 | 1.3219 | 0.3514 | 1.3219 | 1.1497 | | No log | 2.7703 | 410 | 1.3503 | 0.3556 | 1.3503 | 1.1620 | | No log | 2.7838 | 412 | 1.3182 | 0.3429 | 1.3182 | 1.1481 | | No log | 2.7973 | 414 | 1.2754 | 0.3334 | 1.2754 | 1.1293 | | No log | 2.8108 | 416 | 1.2420 | 0.3334 | 1.2420 | 1.1145 | | No log | 2.8243 | 418 | 1.1930 | 0.3322 | 1.1930 | 1.0923 | | No log | 2.8378 | 420 
| 1.2303 | 0.3322 | 1.2303 | 1.1092 | | No log | 2.8514 | 422 | 1.3068 | 0.3625 | 1.3068 | 1.1432 | | No log | 2.8649 | 424 | 1.3904 | 0.3556 | 1.3904 | 1.1791 | | No log | 2.8784 | 426 | 1.4123 | 0.3584 | 1.4123 | 1.1884 | | No log | 2.8919 | 428 | 1.3715 | 0.3713 | 1.3715 | 1.1711 | | No log | 2.9054 | 430 | 1.3929 | 0.3482 | 1.3929 | 1.1802 | | No log | 2.9189 | 432 | 1.4558 | 0.3213 | 1.4558 | 1.2066 | | No log | 2.9324 | 434 | 1.4827 | 0.3420 | 1.4827 | 1.2177 | | No log | 2.9459 | 436 | 1.4131 | 0.3358 | 1.4131 | 1.1887 | | No log | 2.9595 | 438 | 1.2863 | 0.3941 | 1.2863 | 1.1342 | | No log | 2.9730 | 440 | 1.1834 | 0.3800 | 1.1834 | 1.0879 | | No log | 2.9865 | 442 | 1.2083 | 0.3998 | 1.2083 | 1.0992 | | No log | 3.0 | 444 | 1.3295 | 0.3436 | 1.3295 | 1.1530 | | No log | 3.0135 | 446 | 1.3818 | 0.3279 | 1.3818 | 1.1755 | | No log | 3.0270 | 448 | 1.4036 | 0.3644 | 1.4036 | 1.1847 | | No log | 3.0405 | 450 | 1.3613 | 0.3798 | 1.3613 | 1.1667 | | No log | 3.0541 | 452 | 1.4063 | 0.3644 | 1.4063 | 1.1859 | | No log | 3.0676 | 454 | 1.3811 | 0.3644 | 1.3811 | 1.1752 | | No log | 3.0811 | 456 | 1.4217 | 0.3325 | 1.4217 | 1.1924 | | No log | 3.0946 | 458 | 1.4501 | 0.3196 | 1.4501 | 1.2042 | | No log | 3.1081 | 460 | 1.3980 | 0.3459 | 1.3980 | 1.1824 | | No log | 3.1216 | 462 | 1.3266 | 0.3951 | 1.3266 | 1.1518 | | No log | 3.1351 | 464 | 1.2896 | 0.4277 | 1.2896 | 1.1356 | | No log | 3.1486 | 466 | 1.2852 | 0.4277 | 1.2852 | 1.1337 | | No log | 3.1622 | 468 | 1.2574 | 0.4277 | 1.2574 | 1.1214 | | No log | 3.1757 | 470 | 1.2341 | 0.3893 | 1.2341 | 1.1109 | | No log | 3.1892 | 472 | 1.1627 | 0.4045 | 1.1627 | 1.0783 | | No log | 3.2027 | 474 | 1.1079 | 0.4011 | 1.1079 | 1.0526 | | No log | 3.2162 | 476 | 1.0799 | 0.4202 | 1.0799 | 1.0392 | | No log | 3.2297 | 478 | 1.1416 | 0.4202 | 1.1416 | 1.0685 | | No log | 3.2432 | 480 | 1.2296 | 0.3906 | 1.2296 | 1.1089 | | No log | 3.2568 | 482 | 1.2582 | 0.3483 | 1.2582 | 1.1217 | | No log | 3.2703 | 484 | 1.2838 | 0.3436 | 1.2838 | 1.1331 | | No log | 3.2838 | 486 | 1.3429 | 0.2698 | 1.3429 | 1.1588 | | No log | 3.2973 | 488 | 1.3702 | 0.2907 | 1.3702 | 1.1705 | | No log | 3.3108 | 490 | 1.3093 | 0.3683 | 1.3093 | 1.1442 | | No log | 3.3243 | 492 | 1.2328 | 0.4159 | 1.2328 | 1.1103 | | No log | 3.3378 | 494 | 1.1756 | 0.4746 | 1.1756 | 1.0843 | | No log | 3.3514 | 496 | 1.1548 | 0.4764 | 1.1548 | 1.0746 | | No log | 3.3649 | 498 | 1.1868 | 0.4368 | 1.1868 | 1.0894 | | 0.4843 | 3.3784 | 500 | 1.2571 | 0.4222 | 1.2571 | 1.1212 | | 0.4843 | 3.3919 | 502 | 1.3959 | 0.3566 | 1.3959 | 1.1815 | | 0.4843 | 3.4054 | 504 | 1.4994 | 0.3307 | 1.4994 | 1.2245 | | 0.4843 | 3.4189 | 506 | 1.4898 | 0.3336 | 1.4898 | 1.2206 | | 0.4843 | 3.4324 | 508 | 1.3931 | 0.3164 | 1.3931 | 1.1803 | | 0.4843 | 3.4459 | 510 | 1.2753 | 0.3556 | 1.2753 | 1.1293 | | 0.4843 | 3.4595 | 512 | 1.1552 | 0.3556 | 1.1552 | 1.0748 | | 0.4843 | 3.4730 | 514 | 1.0829 | 0.4479 | 1.0829 | 1.0406 | | 0.4843 | 3.4865 | 516 | 1.0837 | 0.4665 | 1.0837 | 1.0410 | | 0.4843 | 3.5 | 518 | 1.1291 | 0.4397 | 1.1291 | 1.0626 | | 0.4843 | 3.5135 | 520 | 1.2411 | 0.4154 | 1.2411 | 1.1140 | | 0.4843 | 3.5270 | 522 | 1.3943 | 0.3957 | 1.3943 | 1.1808 | | 0.4843 | 3.5405 | 524 | 1.4900 | 0.3863 | 1.4900 | 1.2206 | | 0.4843 | 3.5541 | 526 | 1.5130 | 0.3708 | 1.5130 | 1.2300 | | 0.4843 | 3.5676 | 528 | 1.5632 | 0.3279 | 1.5632 | 1.2503 | | 0.4843 | 3.5811 | 530 | 1.5396 | 0.3237 | 1.5396 | 1.2408 | | 0.4843 | 3.5946 | 532 | 1.4458 | 0.3715 | 1.4458 | 1.2024 | | 0.4843 | 3.6081 | 534 | 1.2895 | 0.3526 | 1.2895 
| 1.1356 | | 0.4843 | 3.6216 | 536 | 1.2228 | 0.3746 | 1.2228 | 1.1058 | | 0.4843 | 3.6351 | 538 | 1.2497 | 0.3851 | 1.2497 | 1.1179 | | 0.4843 | 3.6486 | 540 | 1.3156 | 0.3870 | 1.3156 | 1.1470 | | 0.4843 | 3.6622 | 542 | 1.4205 | 0.3892 | 1.4205 | 1.1919 | | 0.4843 | 3.6757 | 544 | 1.5389 | 0.3656 | 1.5389 | 1.2405 | | 0.4843 | 3.6892 | 546 | 1.5597 | 0.3744 | 1.5597 | 1.2489 | | 0.4843 | 3.7027 | 548 | 1.5219 | 0.3722 | 1.5219 | 1.2337 | | 0.4843 | 3.7162 | 550 | 1.4584 | 0.3785 | 1.4584 | 1.2076 | | 0.4843 | 3.7297 | 552 | 1.3586 | 0.3629 | 1.3586 | 1.1656 | | 0.4843 | 3.7432 | 554 | 1.2286 | 0.4052 | 1.2286 | 1.1084 | | 0.4843 | 3.7568 | 556 | 1.1552 | 0.4240 | 1.1552 | 1.0748 | | 0.4843 | 3.7703 | 558 | 1.1425 | 0.4240 | 1.1425 | 1.0689 | | 0.4843 | 3.7838 | 560 | 1.2391 | 0.3650 | 1.2391 | 1.1131 | | 0.4843 | 3.7973 | 562 | 1.3828 | 0.3681 | 1.3828 | 1.1759 | | 0.4843 | 3.8108 | 564 | 1.4735 | 0.3515 | 1.4735 | 1.2139 | | 0.4843 | 3.8243 | 566 | 1.5935 | 0.3565 | 1.5935 | 1.2623 | | 0.4843 | 3.8378 | 568 | 1.5478 | 0.3517 | 1.5478 | 1.2441 | | 0.4843 | 3.8514 | 570 | 1.5515 | 0.3715 | 1.5515 | 1.2456 | | 0.4843 | 3.8649 | 572 | 1.6183 | 0.3212 | 1.6183 | 1.2721 | | 0.4843 | 3.8784 | 574 | 1.6706 | 0.2997 | 1.6706 | 1.2925 | | 0.4843 | 3.8919 | 576 | 1.6204 | 0.3337 | 1.6204 | 1.2729 | | 0.4843 | 3.9054 | 578 | 1.5601 | 0.3420 | 1.5601 | 1.2490 | | 0.4843 | 3.9189 | 580 | 1.5052 | 0.3599 | 1.5052 | 1.2269 | | 0.4843 | 3.9324 | 582 | 1.5167 | 0.3883 | 1.5167 | 1.2315 | | 0.4843 | 3.9459 | 584 | 1.5539 | 0.3485 | 1.5539 | 1.2465 | | 0.4843 | 3.9595 | 586 | 1.6238 | 0.3231 | 1.6238 | 1.2743 | | 0.4843 | 3.9730 | 588 | 1.6686 | 0.2997 | 1.6686 | 1.2917 | | 0.4843 | 3.9865 | 590 | 1.6629 | 0.2829 | 1.6629 | 1.2895 | | 0.4843 | 4.0 | 592 | 1.6054 | 0.3411 | 1.6054 | 1.2671 | | 0.4843 | 4.0135 | 594 | 1.5376 | 0.3485 | 1.5376 | 1.2400 | | 0.4843 | 4.0270 | 596 | 1.4966 | 0.3459 | 1.4966 | 1.2234 | | 0.4843 | 4.0405 | 598 | 1.5522 | 0.3485 | 1.5522 | 1.2459 | | 0.4843 | 4.0541 | 600 | 1.7138 | 0.3320 | 1.7138 | 1.3091 | | 0.4843 | 4.0676 | 602 | 1.8086 | 0.2420 | 1.8086 | 1.3449 | | 0.4843 | 4.0811 | 604 | 1.7744 | 0.2194 | 1.7744 | 1.3321 | | 0.4843 | 4.0946 | 606 | 1.6856 | 0.2940 | 1.6856 | 1.2983 | | 0.4843 | 4.1081 | 608 | 1.5960 | 0.2795 | 1.5960 | 1.2633 | | 0.4843 | 4.1216 | 610 | 1.4769 | 0.3037 | 1.4769 | 1.2153 | | 0.4843 | 4.1351 | 612 | 1.4541 | 0.3213 | 1.4541 | 1.2058 | | 0.4843 | 4.1486 | 614 | 1.5378 | 0.2961 | 1.5378 | 1.2401 | | 0.4843 | 4.1622 | 616 | 1.6682 | 0.3027 | 1.6682 | 1.2916 | | 0.4843 | 4.1757 | 618 | 1.7467 | 0.3027 | 1.7467 | 1.3216 | | 0.4843 | 4.1892 | 620 | 1.7797 | 0.3027 | 1.7797 | 1.3340 | | 0.4843 | 4.2027 | 622 | 1.7893 | 0.2772 | 1.7893 | 1.3377 | | 0.4843 | 4.2162 | 624 | 1.8578 | 0.2871 | 1.8578 | 1.3630 | | 0.4843 | 4.2297 | 626 | 1.9391 | 0.2651 | 1.9391 | 1.3925 | | 0.4843 | 4.2432 | 628 | 1.9471 | 0.2437 | 1.9471 | 1.3954 | | 0.4843 | 4.2568 | 630 | 1.7806 | 0.3131 | 1.7806 | 1.3344 | | 0.4843 | 4.2703 | 632 | 1.5815 | 0.3411 | 1.5815 | 1.2576 | | 0.4843 | 4.2838 | 634 | 1.5468 | 0.3411 | 1.5468 | 1.2437 | | 0.4843 | 4.2973 | 636 | 1.5500 | 0.3272 | 1.5500 | 1.2450 | | 0.4843 | 4.3108 | 638 | 1.5949 | 0.3164 | 1.5949 | 1.2629 | | 0.4843 | 4.3243 | 640 | 1.7027 | 0.2894 | 1.7027 | 1.3049 | | 0.4843 | 4.3378 | 642 | 1.7132 | 0.2894 | 1.7132 | 1.3089 | | 0.4843 | 4.3514 | 644 | 1.6814 | 0.3131 | 1.6814 | 1.2967 | | 0.4843 | 4.3649 | 646 | 1.5899 | 0.3131 | 1.5899 | 1.2609 | | 0.4843 | 4.3784 | 648 | 1.5817 | 0.3131 | 1.5817 | 1.2576 | | 0.4843 | 
4.3919 | 650 | 1.6293 | 0.3131 | 1.6293 | 1.2765 | | 0.4843 | 4.4054 | 652 | 1.5802 | 0.3131 | 1.5802 | 1.2571 | | 0.4843 | 4.4189 | 654 | 1.5614 | 0.3319 | 1.5614 | 1.2495 | | 0.4843 | 4.4324 | 656 | 1.5583 | 0.3319 | 1.5583 | 1.2483 | | 0.4843 | 4.4459 | 658 | 1.5909 | 0.3212 | 1.5909 | 1.2613 | | 0.4843 | 4.4595 | 660 | 1.5121 | 0.3164 | 1.5121 | 1.2297 | | 0.4843 | 4.4730 | 662 | 1.4226 | 0.3444 | 1.4226 | 1.1927 | | 0.4843 | 4.4865 | 664 | 1.3915 | 0.3444 | 1.3915 | 1.1796 | | 0.4843 | 4.5 | 666 | 1.3667 | 0.3444 | 1.3667 | 1.1691 | | 0.4843 | 4.5135 | 668 | 1.4298 | 0.3444 | 1.4298 | 1.1958 | | 0.4843 | 4.5270 | 670 | 1.5448 | 0.3328 | 1.5448 | 1.2429 | | 0.4843 | 4.5405 | 672 | 1.5714 | 0.3300 | 1.5714 | 1.2535 | | 0.4843 | 4.5541 | 674 | 1.5683 | 0.3300 | 1.5683 | 1.2523 | | 0.4843 | 4.5676 | 676 | 1.5963 | 0.3300 | 1.5963 | 1.2634 | | 0.4843 | 4.5811 | 678 | 1.6798 | 0.3293 | 1.6798 | 1.2961 | | 0.4843 | 4.5946 | 680 | 1.7009 | 0.2997 | 1.7009 | 1.3042 | | 0.4843 | 4.6081 | 682 | 1.6683 | 0.3365 | 1.6683 | 1.2916 | | 0.4843 | 4.6216 | 684 | 1.6730 | 0.3015 | 1.6730 | 1.2934 | | 0.4843 | 4.6351 | 686 | 1.6536 | 0.3015 | 1.6536 | 1.2859 | | 0.4843 | 4.6486 | 688 | 1.6162 | 0.3015 | 1.6162 | 1.2713 | | 0.4843 | 4.6622 | 690 | 1.5439 | 0.3156 | 1.5439 | 1.2425 | | 0.4843 | 4.6757 | 692 | 1.4690 | 0.3549 | 1.4690 | 1.2120 | | 0.4843 | 4.6892 | 694 | 1.4470 | 0.3468 | 1.4470 | 1.2029 | | 0.4843 | 4.7027 | 696 | 1.4947 | 0.3549 | 1.4947 | 1.2226 | | 0.4843 | 4.7162 | 698 | 1.5317 | 0.3269 | 1.5317 | 1.2376 | | 0.4843 | 4.7297 | 700 | 1.5962 | 0.3299 | 1.5962 | 1.2634 | | 0.4843 | 4.7432 | 702 | 1.5883 | 0.3269 | 1.5883 | 1.2603 | | 0.4843 | 4.7568 | 704 | 1.5742 | 0.3414 | 1.5742 | 1.2547 | | 0.4843 | 4.7703 | 706 | 1.5851 | 0.3239 | 1.5851 | 1.2590 | | 0.4843 | 4.7838 | 708 | 1.5319 | 0.3207 | 1.5319 | 1.2377 | | 0.4843 | 4.7973 | 710 | 1.4511 | 0.3207 | 1.4511 | 1.2046 | | 0.4843 | 4.8108 | 712 | 1.3951 | 0.3141 | 1.3951 | 1.1811 | | 0.4843 | 4.8243 | 714 | 1.3101 | 0.3567 | 1.3101 | 1.1446 | | 0.4843 | 4.8378 | 716 | 1.2368 | 0.3837 | 1.2368 | 1.1121 | | 0.4843 | 4.8514 | 718 | 1.2388 | 0.3837 | 1.2388 | 1.1130 | | 0.4843 | 4.8649 | 720 | 1.3204 | 0.3909 | 1.3204 | 1.1491 | | 0.4843 | 4.8784 | 722 | 1.4661 | 0.3496 | 1.4661 | 1.2108 | | 0.4843 | 4.8919 | 724 | 1.6460 | 0.3005 | 1.6460 | 1.2830 | | 0.4843 | 4.9054 | 726 | 1.7340 | 0.2799 | 1.7340 | 1.3168 | | 0.4843 | 4.9189 | 728 | 1.7026 | 0.2833 | 1.7026 | 1.3048 | | 0.4843 | 4.9324 | 730 | 1.7014 | 0.2973 | 1.7014 | 1.3044 | | 0.4843 | 4.9459 | 732 | 1.6884 | 0.2501 | 1.6884 | 1.2994 | | 0.4843 | 4.9595 | 734 | 1.6342 | 0.2906 | 1.6342 | 1.2784 | | 0.4843 | 4.9730 | 736 | 1.5849 | 0.3156 | 1.5849 | 1.2589 | | 0.4843 | 4.9865 | 738 | 1.5473 | 0.3187 | 1.5473 | 1.2439 | | 0.4843 | 5.0 | 740 | 1.4971 | 0.3411 | 1.4971 | 1.2236 | | 0.4843 | 5.0135 | 742 | 1.4199 | 0.3607 | 1.4199 | 1.1916 | | 0.4843 | 5.0270 | 744 | 1.4074 | 0.3729 | 1.4074 | 1.1863 | | 0.4843 | 5.0405 | 746 | 1.4263 | 0.3632 | 1.4263 | 1.1943 | | 0.4843 | 5.0541 | 748 | 1.4690 | 0.3641 | 1.4690 | 1.2120 | | 0.4843 | 5.0676 | 750 | 1.5846 | 0.3177 | 1.5846 | 1.2588 | | 0.4843 | 5.0811 | 752 | 1.6485 | 0.2940 | 1.6485 | 1.2839 | | 0.4843 | 5.0946 | 754 | 1.6776 | 0.2900 | 1.6776 | 1.2952 | | 0.4843 | 5.1081 | 756 | 1.6652 | 0.2866 | 1.6652 | 1.2904 | | 0.4843 | 5.1216 | 758 | 1.5799 | 0.3146 | 1.5799 | 1.2569 | | 0.4843 | 5.1351 | 760 | 1.5256 | 0.3346 | 1.5256 | 1.2351 | | 0.4843 | 5.1486 | 762 | 1.4214 | 0.3539 | 1.4214 | 1.1922 | | 0.4843 | 5.1622 | 764 | 1.3519 | 
0.3655 | 1.3519 | 1.1627 | | 0.4843 | 5.1757 | 766 | 1.3078 | 0.3482 | 1.3078 | 1.1436 | | 0.4843 | 5.1892 | 768 | 1.3032 | 0.3612 | 1.3032 | 1.1416 | | 0.4843 | 5.2027 | 770 | 1.3558 | 0.3285 | 1.3558 | 1.1644 | | 0.4843 | 5.2162 | 772 | 1.4487 | 0.3376 | 1.4487 | 1.2036 | | 0.4843 | 5.2297 | 774 | 1.5575 | 0.3192 | 1.5575 | 1.2480 | | 0.4843 | 5.2432 | 776 | 1.6707 | 0.3037 | 1.6707 | 1.2926 | | 0.4843 | 5.2568 | 778 | 1.6930 | 0.2963 | 1.6930 | 1.3011 | | 0.4843 | 5.2703 | 780 | 1.7440 | 0.2403 | 1.7440 | 1.3206 | | 0.4843 | 5.2838 | 782 | 1.7784 | 0.2403 | 1.7784 | 1.3336 | | 0.4843 | 5.2973 | 784 | 1.7785 | 0.2403 | 1.7785 | 1.3336 | | 0.4843 | 5.3108 | 786 | 1.7581 | 0.2403 | 1.7581 | 1.3259 | | 0.4843 | 5.3243 | 788 | 1.7099 | 0.2963 | 1.7099 | 1.3076 | | 0.4843 | 5.3378 | 790 | 1.6715 | 0.3037 | 1.6715 | 1.2929 | | 0.4843 | 5.3514 | 792 | 1.5627 | 0.3282 | 1.5627 | 1.2501 | | 0.4843 | 5.3649 | 794 | 1.4563 | 0.3356 | 1.4563 | 1.2068 | | 0.4843 | 5.3784 | 796 | 1.3872 | 0.3493 | 1.3872 | 1.1778 | | 0.4843 | 5.3919 | 798 | 1.3574 | 0.3493 | 1.3574 | 1.1651 | | 0.4843 | 5.4054 | 800 | 1.3943 | 0.3772 | 1.3943 | 1.1808 | | 0.4843 | 5.4189 | 802 | 1.4231 | 0.3472 | 1.4231 | 1.1929 | | 0.4843 | 5.4324 | 804 | 1.4797 | 0.3253 | 1.4797 | 1.2164 | | 0.4843 | 5.4459 | 806 | 1.5834 | 0.3282 | 1.5834 | 1.2583 | | 0.4843 | 5.4595 | 808 | 1.6717 | 0.2973 | 1.6717 | 1.2929 | | 0.4843 | 5.4730 | 810 | 1.6744 | 0.3037 | 1.6744 | 1.2940 | | 0.4843 | 5.4865 | 812 | 1.6834 | 0.2973 | 1.6834 | 1.2974 | | 0.4843 | 5.5 | 814 | 1.6109 | 0.2940 | 1.6109 | 1.2692 | | 0.4843 | 5.5135 | 816 | 1.5070 | 0.3187 | 1.5070 | 1.2276 | | 0.4843 | 5.5270 | 818 | 1.4584 | 0.3187 | 1.4584 | 1.2076 | | 0.4843 | 5.5405 | 820 | 1.3793 | 0.3549 | 1.3793 | 1.1744 | | 0.4843 | 5.5541 | 822 | 1.3356 | 0.3620 | 1.3356 | 1.1557 | | 0.4843 | 5.5676 | 824 | 1.3189 | 0.3539 | 1.3189 | 1.1484 | | 0.4843 | 5.5811 | 826 | 1.3105 | 0.3539 | 1.3105 | 1.1448 | | 0.4843 | 5.5946 | 828 | 1.3576 | 0.3715 | 1.3576 | 1.1652 | | 0.4843 | 5.6081 | 830 | 1.4240 | 0.3383 | 1.4240 | 1.1933 | | 0.4843 | 5.6216 | 832 | 1.5137 | 0.3005 | 1.5137 | 1.2303 | | 0.4843 | 5.6351 | 834 | 1.5774 | 0.3005 | 1.5774 | 1.2560 | | 0.4843 | 5.6486 | 836 | 1.5546 | 0.3047 | 1.5546 | 1.2468 | | 0.4843 | 5.6622 | 838 | 1.4934 | 0.3454 | 1.4934 | 1.2220 | | 0.4843 | 5.6757 | 840 | 1.4858 | 0.3289 | 1.4858 | 1.2189 | | 0.4843 | 5.6892 | 842 | 1.4804 | 0.3383 | 1.4804 | 1.2167 | | 0.4843 | 5.7027 | 844 | 1.5103 | 0.3133 | 1.5103 | 1.2289 | | 0.4843 | 5.7162 | 846 | 1.5280 | 0.3133 | 1.5280 | 1.2361 | | 0.4843 | 5.7297 | 848 | 1.5636 | 0.3037 | 1.5636 | 1.2504 | | 0.4843 | 5.7432 | 850 | 1.5866 | 0.2940 | 1.5866 | 1.2596 | | 0.4843 | 5.7568 | 852 | 1.5970 | 0.2940 | 1.5970 | 1.2637 | | 0.4843 | 5.7703 | 854 | 1.5841 | 0.2940 | 1.5841 | 1.2586 | | 0.4843 | 5.7838 | 856 | 1.5314 | 0.3156 | 1.5314 | 1.2375 | | 0.4843 | 5.7973 | 858 | 1.4913 | 0.3386 | 1.4913 | 1.2212 | | 0.4843 | 5.8108 | 860 | 1.4988 | 0.3259 | 1.4988 | 1.2242 | | 0.4843 | 5.8243 | 862 | 1.5453 | 0.2963 | 1.5453 | 1.2431 | | 0.4843 | 5.8378 | 864 | 1.6240 | 0.2963 | 1.6240 | 1.2744 | | 0.4843 | 5.8514 | 866 | 1.6744 | 0.2963 | 1.6744 | 1.2940 | | 0.4843 | 5.8649 | 868 | 1.6201 | 0.2963 | 1.6201 | 1.2728 | | 0.4843 | 5.8784 | 870 | 1.5831 | 0.2896 | 1.5831 | 1.2582 | | 0.4843 | 5.8919 | 872 | 1.6047 | 0.2861 | 1.6047 | 1.2668 | | 0.4843 | 5.9054 | 874 | 1.5929 | 0.3177 | 1.5929 | 1.2621 | | 0.4843 | 5.9189 | 876 | 1.5566 | 0.3146 | 1.5566 | 1.2476 | | 0.4843 | 5.9324 | 878 | 1.5381 | 0.3114 | 1.5381 | 
1.2402 | | 0.4843 | 5.9459 | 880 | 1.5121 | 0.3376 | 1.5121 | 1.2297 | | 0.4843 | 5.9595 | 882 | 1.4493 | 0.3593 | 1.4493 | 1.2039 | | 0.4843 | 5.9730 | 884 | 1.3842 | 0.3219 | 1.3842 | 1.1765 | | 0.4843 | 5.9865 | 886 | 1.3791 | 0.3402 | 1.3791 | 1.1743 | | 0.4843 | 6.0 | 888 | 1.4220 | 0.3271 | 1.4220 | 1.1925 | | 0.4843 | 6.0135 | 890 | 1.4715 | 0.3549 | 1.4715 | 1.2130 | | 0.4843 | 6.0270 | 892 | 1.5570 | 0.3112 | 1.5570 | 1.2478 | | 0.4843 | 6.0405 | 894 | 1.6038 | 0.2726 | 1.6038 | 1.2664 | | 0.4843 | 6.0541 | 896 | 1.6519 | 0.2726 | 1.6519 | 1.2852 | | 0.4843 | 6.0676 | 898 | 1.6918 | 0.2629 | 1.6918 | 1.3007 | | 0.4843 | 6.0811 | 900 | 1.6915 | 0.2629 | 1.6915 | 1.3006 | | 0.4843 | 6.0946 | 902 | 1.6379 | 0.2726 | 1.6379 | 1.2798 | | 0.4843 | 6.1081 | 904 | 1.6312 | 0.2726 | 1.6312 | 1.2772 | | 0.4843 | 6.1216 | 906 | 1.6861 | 0.2726 | 1.6861 | 1.2985 | | 0.4843 | 6.1351 | 908 | 1.7236 | 0.2833 | 1.7236 | 1.3129 | | 0.4843 | 6.1486 | 910 | 1.7048 | 0.2772 | 1.7048 | 1.3057 | | 0.4843 | 6.1622 | 912 | 1.6590 | 0.2799 | 1.6590 | 1.2880 | | 0.4843 | 6.1757 | 914 | 1.5906 | 0.3037 | 1.5906 | 1.2612 | | 0.4843 | 6.1892 | 916 | 1.5579 | 0.3328 | 1.5579 | 1.2482 | | 0.4843 | 6.2027 | 918 | 1.5400 | 0.3328 | 1.5400 | 1.2410 | | 0.4843 | 6.2162 | 920 | 1.5251 | 0.3328 | 1.5251 | 1.2350 | | 0.4843 | 6.2297 | 922 | 1.5642 | 0.3356 | 1.5642 | 1.2507 | | 0.4843 | 6.2432 | 924 | 1.5742 | 0.3246 | 1.5742 | 1.2547 | | 0.4843 | 6.2568 | 926 | 1.6366 | 0.3006 | 1.6366 | 1.2793 | | 0.4843 | 6.2703 | 928 | 1.7009 | 0.3037 | 1.7009 | 1.3042 | | 0.4843 | 6.2838 | 930 | 1.6848 | 0.3037 | 1.6848 | 1.2980 | | 0.4843 | 6.2973 | 932 | 1.6060 | 0.2973 | 1.6060 | 1.2673 | | 0.4843 | 6.3108 | 934 | 1.5118 | 0.3239 | 1.5118 | 1.2295 | | 0.4843 | 6.3243 | 936 | 1.4488 | 0.3277 | 1.4488 | 1.2037 | | 0.4843 | 6.3378 | 938 | 1.4137 | 0.3376 | 1.4137 | 1.1890 | | 0.4843 | 6.3514 | 940 | 1.4087 | 0.3468 | 1.4087 | 1.1869 | | 0.4843 | 6.3649 | 942 | 1.4317 | 0.3404 | 1.4317 | 1.1965 | | 0.4843 | 6.3784 | 944 | 1.4348 | 0.3376 | 1.4348 | 1.1978 | | 0.4843 | 6.3919 | 946 | 1.4788 | 0.3289 | 1.4788 | 1.2161 | | 0.4843 | 6.4054 | 948 | 1.5610 | 0.3177 | 1.5610 | 1.2494 | | 0.4843 | 6.4189 | 950 | 1.6743 | 0.2940 | 1.6743 | 1.2940 | | 0.4843 | 6.4324 | 952 | 1.7860 | 0.2484 | 1.7860 | 1.3364 | | 0.4843 | 6.4459 | 954 | 1.8452 | 0.2430 | 1.8452 | 1.3584 | | 0.4843 | 6.4595 | 956 | 1.9013 | 0.2216 | 1.9013 | 1.3789 | | 0.4843 | 6.4730 | 958 | 1.9291 | 0.2216 | 1.9291 | 1.3889 | | 0.4843 | 6.4865 | 960 | 1.9040 | 0.2216 | 1.9040 | 1.3798 | | 0.4843 | 6.5 | 962 | 1.8475 | 0.2222 | 1.8475 | 1.3592 | | 0.4843 | 6.5135 | 964 | 1.7439 | 0.2799 | 1.7439 | 1.3206 | | 0.4843 | 6.5270 | 966 | 1.6652 | 0.2799 | 1.6652 | 1.2904 | | 0.4843 | 6.5405 | 968 | 1.6421 | 0.2799 | 1.6421 | 1.2814 | | 0.4843 | 6.5541 | 970 | 1.6246 | 0.2799 | 1.6246 | 1.2746 | | 0.4843 | 6.5676 | 972 | 1.6389 | 0.2799 | 1.6389 | 1.2802 | | 0.4843 | 6.5811 | 974 | 1.6485 | 0.2799 | 1.6485 | 1.2840 | | 0.4843 | 6.5946 | 976 | 1.6590 | 0.2799 | 1.6590 | 1.2880 | | 0.4843 | 6.6081 | 978 | 1.6241 | 0.2900 | 1.6241 | 1.2744 | | 0.4843 | 6.6216 | 980 | 1.5802 | 0.3253 | 1.5802 | 1.2571 | | 0.4843 | 6.6351 | 982 | 1.5149 | 0.3337 | 1.5149 | 1.2308 | | 0.4843 | 6.6486 | 984 | 1.4915 | 0.3533 | 1.4915 | 1.2213 | | 0.4843 | 6.6622 | 986 | 1.4820 | 0.3559 | 1.4820 | 1.2174 | | 0.4843 | 6.6757 | 988 | 1.4935 | 0.3533 | 1.4935 | 1.2221 | | 0.4843 | 6.6892 | 990 | 1.4911 | 0.3299 | 1.4911 | 1.2211 | | 0.4843 | 6.7027 | 992 | 1.4787 | 0.3386 | 1.4787 | 1.2160 | | 0.4843 | 6.7162 
| 994 | 1.4632 | 0.3454 | 1.4632 | 1.2096 |
| 0.4843 | 6.7297 | 996 | 1.4362 | 0.3601 | 1.4362 | 1.1984 |
| 0.4843 | 6.7432 | 998 | 1.4303 | 0.3523 | 1.4303 | 1.1960 |
| 0.0777 | 6.7568 | 1000 | 1.4511 | 0.3617 | 1.4511 | 1.2046 |
| 0.0777 | 6.7703 | 1002 | 1.4840 | 0.3243 | 1.4840 | 1.2182 |
| 0.0777 | 6.7838 | 1004 | 1.5291 | 0.3037 | 1.5291 | 1.2366 |
| 0.0777 | 6.7973 | 1006 | 1.5929 | 0.2690 | 1.5929 | 1.2621 |
| 0.0777 | 6.8108 | 1008 | 1.6170 | 0.2764 | 1.6170 | 1.2716 |
| 0.0777 | 6.8243 | 1010 | 1.6099 | 0.2764 | 1.6099 | 1.2688 |
| 0.0777 | 6.8378 | 1012 | 1.6387 | 0.2764 | 1.6387 | 1.2801 |
| 0.0777 | 6.8514 | 1014 | 1.6888 | 0.2664 | 1.6888 | 1.2995 |
| 0.0777 | 6.8649 | 1016 | 1.7151 | 0.2602 | 1.7151 | 1.3096 |
| 0.0777 | 6.8784 | 1018 | 1.7040 | 0.2602 | 1.7040 | 1.3054 |
| 0.0777 | 6.8919 | 1020 | 1.6667 | 0.2664 | 1.6667 | 1.2910 |
| 0.0777 | 6.9054 | 1022 | 1.6046 | 0.2866 | 1.6046 | 1.2667 |
| 0.0777 | 6.9189 | 1024 | 1.5979 | 0.2866 | 1.5979 | 1.2641 |
| 0.0777 | 6.9324 | 1026 | 1.6351 | 0.3143 | 1.6351 | 1.2787 |
| 0.0777 | 6.9459 | 1028 | 1.7144 | 0.2799 | 1.7144 | 1.3093 |
| 0.0777 | 6.9595 | 1030 | 1.7576 | 0.2382 | 1.7576 | 1.3257 |
| 0.0777 | 6.9730 | 1032 | 1.8155 | 0.2382 | 1.8155 | 1.3474 |
| 0.0777 | 6.9865 | 1034 | 1.8856 | 0.2362 | 1.8856 | 1.3732 |
| 0.0777 | 7.0 | 1036 | 1.9319 | 0.2008 | 1.9319 | 1.3899 |
| 0.0777 | 7.0135 | 1038 | 1.9445 | 0.2090 | 1.9445 | 1.3944 |
| 0.0777 | 7.0270 | 1040 | 1.9178 | 0.2216 | 1.9178 | 1.3849 |
| 0.0777 | 7.0405 | 1042 | 1.8738 | 0.2176 | 1.8738 | 1.3689 |
| 0.0777 | 7.0541 | 1044 | 1.8195 | 0.2362 | 1.8195 | 1.3489 |
| 0.0777 | 7.0676 | 1046 | 1.7677 | 0.2735 | 1.7677 | 1.3295 |
| 0.0777 | 7.0811 | 1048 | 1.6784 | 0.2726 | 1.6784 | 1.2955 |
| 0.0777 | 7.0946 | 1050 | 1.5966 | 0.3272 | 1.5966 | 1.2636 |
| 0.0777 | 7.1081 | 1052 | 1.5575 | 0.3243 | 1.5575 | 1.2480 |
| 0.0777 | 7.1216 | 1054 | 1.5385 | 0.3243 | 1.5385 | 1.2404 |
| 0.0777 | 7.1351 | 1056 | 1.5489 | 0.3498 | 1.5489 | 1.2445 |
| 0.0777 | 7.1486 | 1058 | 1.5868 | 0.3207 | 1.5868 | 1.2597 |
| 0.0777 | 7.1622 | 1060 | 1.6480 | 0.3006 | 1.6480 | 1.2837 |
| 0.0777 | 7.1757 | 1062 | 1.7124 | 0.2710 | 1.7124 | 1.3086 |
| 0.0777 | 7.1892 | 1064 | 1.8130 | 0.2745 | 1.8130 | 1.3465 |
| 0.0777 | 7.2027 | 1066 | 1.9138 | 0.2593 | 1.9138 | 1.3834 |
| 0.0777 | 7.2162 | 1068 | 1.9545 | 0.2380 | 1.9545 | 1.3981 |
| 0.0777 | 7.2297 | 1070 | 1.9453 | 0.2380 | 1.9453 | 1.3947 |
| 0.0777 | 7.2432 | 1072 | 1.9030 | 0.2521 | 1.9030 | 1.3795 |
| 0.0777 | 7.2568 | 1074 | 1.8245 | 0.2710 | 1.8245 | 1.3507 |
| 0.0777 | 7.2703 | 1076 | 1.7610 | 0.2966 | 1.7610 | 1.3270 |
| 0.0777 | 7.2838 | 1078 | 1.6751 | 0.3089 | 1.6751 | 1.2943 |
| 0.0777 | 7.2973 | 1080 | 1.6051 | 0.3193 | 1.6051 | 1.2669 |
| 0.0777 | 7.3108 | 1082 | 1.5509 | 0.3459 | 1.5509 | 1.2454 |
| 0.0777 | 7.3243 | 1084 | 1.5108 | 0.3432 | 1.5108 | 1.2292 |
| 0.0777 | 7.3378 | 1086 | 1.5080 | 0.3432 | 1.5080 | 1.2280 |
| 0.0777 | 7.3514 | 1088 | 1.5193 | 0.3156 | 1.5193 | 1.2326 |
| 0.0777 | 7.3649 | 1090 | 1.5547 | 0.3156 | 1.5547 | 1.2469 |
| 0.0777 | 7.3784 | 1092 | 1.6201 | 0.2774 | 1.6201 | 1.2728 |
| 0.0777 | 7.3919 | 1094 | 1.7013 | 0.2710 | 1.7013 | 1.3043 |
| 0.0777 | 7.4054 | 1096 | 1.7729 | 0.2710 | 1.7729 | 1.3315 |
| 0.0777 | 7.4189 | 1098 | 1.8383 | 0.2710 | 1.8383 | 1.3558 |
| 0.0777 | 7.4324 | 1100 | 1.8437 | 0.2484 | 1.8437 | 1.3578 |
| 0.0777 | 7.4459 | 1102 | 1.8295 | 0.2484 | 1.8295 | 1.3526 |
| 0.0777 | 7.4595 | 1104 | 1.7846 | 0.2544 | 1.7846 | 1.3359 |
| 0.0777 | 7.4730 | 1106 | 1.7179 | 0.2799 | 1.7179 | 1.3107 |
| 0.0777 | 7.4865 | 1108 | 1.6302 | 0.3272 | 1.6302 | 1.2768 |
| 0.0777 | 7.5 | 1110 | 1.5776 | 0.3243 | 1.5776 | 1.2560 |
| 0.0777 | 7.5135 | 1112 | 1.5591 | 0.3356 | 1.5591 | 1.2486 |
| 0.0777 | 7.5270 | 1114 | 1.5447 | 0.3617 | 1.5447 | 1.2429 |
| 0.0777 | 7.5405 | 1116 | 1.5693 | 0.3498 | 1.5693 | 1.2527 |
| 0.0777 | 7.5541 | 1118 | 1.5948 | 0.3498 | 1.5948 | 1.2628 |
| 0.0777 | 7.5676 | 1120 | 1.5984 | 0.3498 | 1.5984 | 1.2643 |
| 0.0777 | 7.5811 | 1122 | 1.5721 | 0.3498 | 1.5721 | 1.2538 |
| 0.0777 | 7.5946 | 1124 | 1.5697 | 0.3498 | 1.5697 | 1.2529 |
| 0.0777 | 7.6081 | 1126 | 1.5684 | 0.3498 | 1.5684 | 1.2523 |
| 0.0777 | 7.6216 | 1128 | 1.5825 | 0.3498 | 1.5825 | 1.2580 |
| 0.0777 | 7.6351 | 1130 | 1.5954 | 0.3411 | 1.5954 | 1.2631 |
| 0.0777 | 7.6486 | 1132 | 1.5957 | 0.3524 | 1.5957 | 1.2632 |
| 0.0777 | 7.6622 | 1134 | 1.6137 | 0.3164 | 1.6137 | 1.2703 |
| 0.0777 | 7.6757 | 1136 | 1.6252 | 0.2924 | 1.6252 | 1.2748 |
| 0.0777 | 7.6892 | 1138 | 1.6263 | 0.3027 | 1.6263 | 1.2753 |
| 0.0777 | 7.7027 | 1140 | 1.6395 | 0.2823 | 1.6395 | 1.2804 |
| 0.0777 | 7.7162 | 1142 | 1.6635 | 0.2823 | 1.6635 | 1.2898 |
| 0.0777 | 7.7297 | 1144 | 1.6983 | 0.2894 | 1.6983 | 1.3032 |
| 0.0777 | 7.7432 | 1146 | 1.7530 | 0.2894 | 1.7530 | 1.3240 |
| 0.0777 | 7.7568 | 1148 | 1.7947 | 0.2796 | 1.7947 | 1.3397 |
| 0.0777 | 7.7703 | 1150 | 1.8376 | 0.2468 | 1.8376 | 1.3556 |
| 0.0777 | 7.7838 | 1152 | 1.8441 | 0.2468 | 1.8441 | 1.3580 |
| 0.0777 | 7.7973 | 1154 | 1.8660 | 0.2468 | 1.8660 | 1.3660 |
| 0.0777 | 7.8108 | 1156 | 1.8648 | 0.2709 | 1.8648 | 1.3656 |
| 0.0777 | 7.8243 | 1158 | 1.8199 | 0.2675 | 1.8199 | 1.3490 |
| 0.0777 | 7.8378 | 1160 | 1.7437 | 0.3058 | 1.7437 | 1.3205 |
| 0.0777 | 7.8514 | 1162 | 1.6535 | 0.3265 | 1.6535 | 1.2859 |
| 0.0777 | 7.8649 | 1164 | 1.5965 | 0.3207 | 1.5965 | 1.2635 |
| 0.0777 | 7.8784 | 1166 | 1.5466 | 0.3207 | 1.5466 | 1.2436 |
| 0.0777 | 7.8919 | 1168 | 1.4866 | 0.3762 | 1.4866 | 1.2193 |
| 0.0777 | 7.9054 | 1170 | 1.4509 | 0.3762 | 1.4509 | 1.2045 |
| 0.0777 | 7.9189 | 1172 | 1.4327 | 0.3762 | 1.4327 | 1.1969 |
| 0.0777 | 7.9324 | 1174 | 1.4440 | 0.3685 | 1.4440 | 1.2016 |
| 0.0777 | 7.9459 | 1176 | 1.4679 | 0.3685 | 1.4679 | 1.2116 |
| 0.0777 | 7.9595 | 1178 | 1.5010 | 0.3272 | 1.5010 | 1.2251 |
| 0.0777 | 7.9730 | 1180 | 1.5512 | 0.3237 | 1.5512 | 1.2455 |
| 0.0777 | 7.9865 | 1182 | 1.6061 | 0.3237 | 1.6061 | 1.2673 |
| 0.0777 | 8.0 | 1184 | 1.6234 | 0.3131 | 1.6234 | 1.2741 |
| 0.0777 | 8.0135 | 1186 | 1.6305 | 0.3131 | 1.6305 | 1.2769 |
| 0.0777 | 8.0270 | 1188 | 1.6388 | 0.3131 | 1.6388 | 1.2802 |
| 0.0777 | 8.0405 | 1190 | 1.6614 | 0.3131 | 1.6614 | 1.2889 |
| 0.0777 | 8.0541 | 1192 | 1.6788 | 0.3131 | 1.6788 | 1.2957 |
| 0.0777 | 8.0676 | 1194 | 1.6907 | 0.3058 | 1.6907 | 1.3003 |
| 0.0777 | 8.0811 | 1196 | 1.6982 | 0.2823 | 1.6982 | 1.3032 |
| 0.0777 | 8.0946 | 1198 | 1.7181 | 0.2823 | 1.7181 | 1.3108 |
| 0.0777 | 8.1081 | 1200 | 1.7528 | 0.2894 | 1.7528 | 1.3239 |
| 0.0777 | 8.1216 | 1202 | 1.7677 | 0.2894 | 1.7677 | 1.3295 |
| 0.0777 | 8.1351 | 1204 | 1.7686 | 0.2700 | 1.7686 | 1.3299 |
| 0.0777 | 8.1486 | 1206 | 1.7978 | 0.2700 | 1.7978 | 1.3408 |
| 0.0777 | 8.1622 | 1208 | 1.8424 | 0.2581 | 1.8424 | 1.3574 |
| 0.0777 | 8.1757 | 1210 | 1.8838 | 0.2616 | 1.8838 | 1.3725 |
| 0.0777 | 8.1892 | 1212 | 1.8943 | 0.2430 | 1.8943 | 1.3763 |
| 0.0777 | 8.2027 | 1214 | 1.9187 | 0.2468 | 1.9187 | 1.3852 |
| 0.0777 | 8.2162 | 1216 | 1.9352 | 0.2468 | 1.9352 | 1.3911 |
| 0.0777 | 8.2297 | 1218 | 1.9355 | 0.2468 | 1.9355 | 1.3912 |
| 0.0777 | 8.2432 | 1220 | 1.9105 | 0.2616 | 1.9105 | 1.3822 |
| 0.0777 | 8.2568 | 1222 | 1.8654 | 0.2675 | 1.8654 | 1.3658 |
| 0.0777 | 8.2703 | 1224 | 1.8118 | 0.2866 | 1.8118 | 1.3460 |
| 0.0777 | 8.2838 | 1226 | 1.7500 | 0.2933 | 1.7500 | 1.3229 |
| 0.0777 | 8.2973 | 1228 | 1.6927 | 0.3058 | 1.6927 | 1.3010 |
| 0.0777 | 8.3108 | 1230 | 1.6671 | 0.3058 | 1.6671 | 1.2912 |
| 0.0777 | 8.3243 | 1232 | 1.6634 | 0.3164 | 1.6634 | 1.2897 |
| 0.0777 | 8.3378 | 1234 | 1.6884 | 0.3058 | 1.6884 | 1.2994 |
| 0.0777 | 8.3514 | 1236 | 1.7327 | 0.3058 | 1.7327 | 1.3163 |
| 0.0777 | 8.3649 | 1238 | 1.7879 | 0.2833 | 1.7879 | 1.3371 |
| 0.0777 | 8.3784 | 1240 | 1.8406 | 0.2866 | 1.8406 | 1.3567 |
| 0.0777 | 8.3919 | 1242 | 1.8657 | 0.2866 | 1.8657 | 1.3659 |
| 0.0777 | 8.4054 | 1244 | 1.8778 | 0.2675 | 1.8778 | 1.3703 |
| 0.0777 | 8.4189 | 1246 | 1.9012 | 0.2675 | 1.9012 | 1.3789 |
| 0.0777 | 8.4324 | 1248 | 1.9175 | 0.2616 | 1.9175 | 1.3847 |
| 0.0777 | 8.4459 | 1250 | 1.9259 | 0.2616 | 1.9259 | 1.3878 |
| 0.0777 | 8.4595 | 1252 | 1.9264 | 0.2616 | 1.9264 | 1.3880 |
| 0.0777 | 8.4730 | 1254 | 1.9345 | 0.2616 | 1.9345 | 1.3909 |
| 0.0777 | 8.4865 | 1256 | 1.9404 | 0.2616 | 1.9404 | 1.3930 |
| 0.0777 | 8.5 | 1258 | 1.9366 | 0.2616 | 1.9366 | 1.3916 |
| 0.0777 | 8.5135 | 1260 | 1.9206 | 0.2616 | 1.9206 | 1.3859 |
| 0.0777 | 8.5270 | 1262 | 1.9139 | 0.2616 | 1.9139 | 1.3834 |
| 0.0777 | 8.5405 | 1264 | 1.8961 | 0.2581 | 1.8961 | 1.3770 |
| 0.0777 | 8.5541 | 1266 | 1.8612 | 0.3058 | 1.8612 | 1.3643 |
| 0.0777 | 8.5676 | 1268 | 1.8181 | 0.3027 | 1.8181 | 1.3484 |
| 0.0777 | 8.5811 | 1270 | 1.7694 | 0.3131 | 1.7694 | 1.3302 |
| 0.0777 | 8.5946 | 1272 | 1.7476 | 0.3237 | 1.7476 | 1.3220 |
| 0.0777 | 8.6081 | 1274 | 1.7399 | 0.3173 | 1.7399 | 1.3191 |
| 0.0777 | 8.6216 | 1276 | 1.7266 | 0.3173 | 1.7266 | 1.3140 |
| 0.0777 | 8.6351 | 1278 | 1.7122 | 0.3173 | 1.7122 | 1.3085 |
| 0.0777 | 8.6486 | 1280 | 1.7034 | 0.3173 | 1.7034 | 1.3051 |
| 0.0777 | 8.6622 | 1282 | 1.7010 | 0.3173 | 1.7010 | 1.3042 |
| 0.0777 | 8.6757 | 1284 | 1.7244 | 0.3173 | 1.7244 | 1.3132 |
| 0.0777 | 8.6892 | 1286 | 1.7507 | 0.3203 | 1.7507 | 1.3231 |
| 0.0777 | 8.7027 | 1288 | 1.7984 | 0.3098 | 1.7984 | 1.3410 |
| 0.0777 | 8.7162 | 1290 | 1.8370 | 0.3028 | 1.8370 | 1.3554 |
| 0.0777 | 8.7297 | 1292 | 1.8738 | 0.3028 | 1.8738 | 1.3689 |
| 0.0777 | 8.7432 | 1294 | 1.8844 | 0.3028 | 1.8844 | 1.3727 |
| 0.0777 | 8.7568 | 1296 | 1.8875 | 0.2650 | 1.8875 | 1.3739 |
| 0.0777 | 8.7703 | 1298 | 1.8820 | 0.2650 | 1.8820 | 1.3719 |
| 0.0777 | 8.7838 | 1300 | 1.8807 | 0.2650 | 1.8807 | 1.3714 |
| 0.0777 | 8.7973 | 1302 | 1.8699 | 0.2745 | 1.8699 | 1.3674 |
| 0.0777 | 8.8108 | 1304 | 1.8532 | 0.2745 | 1.8532 | 1.3613 |
| 0.0777 | 8.8243 | 1306 | 1.8247 | 0.2936 | 1.8247 | 1.3508 |
| 0.0777 | 8.8378 | 1308 | 1.7951 | 0.3128 | 1.7951 | 1.3398 |
| 0.0777 | 8.8514 | 1310 | 1.7858 | 0.3128 | 1.7858 | 1.3363 |
| 0.0777 | 8.8649 | 1312 | 1.7912 | 0.3128 | 1.7912 | 1.3384 |
| 0.0777 | 8.8784 | 1314 | 1.7793 | 0.3128 | 1.7793 | 1.3339 |
| 0.0777 | 8.8919 | 1316 | 1.7646 | 0.3128 | 1.7646 | 1.3284 |
| 0.0777 | 8.9054 | 1318 | 1.7547 | 0.3098 | 1.7547 | 1.3247 |
| 0.0777 | 8.9189 | 1320 | 1.7550 | 0.3098 | 1.7550 | 1.3248 |
| 0.0777 | 8.9324 | 1322 | 1.7456 | 0.3098 | 1.7456 | 1.3212 |
| 0.0777 | 8.9459 | 1324 | 1.7257 | 0.3237 | 1.7257 | 1.3137 |
| 0.0777 | 8.9595 | 1326 | 1.7015 | 0.3237 | 1.7015 | 1.3044 |
| 0.0777 | 8.9730 | 1328 | 1.6667 | 0.3237 | 1.6667 | 1.2910 |
| 0.0777 | 8.9865 | 1330 | 1.6413 | 0.3237 | 1.6413 | 1.2811 |
| 0.0777 | 9.0 | 1332 | 1.6272 | 0.3237 | 1.6272 | 1.2756 |
| 0.0777 | 9.0135 | 1334 | 1.6127 | 0.3237 | 1.6127 | 1.2699 |
| 0.0777 | 9.0270 | 1336 | 1.6078 | 0.3237 | 1.6078 | 1.2680 |
| 0.0777 | 9.0405 | 1338 | 1.6143 | 0.3237 | 1.6143 | 1.2705 |
| 0.0777 | 9.0541 | 1340 | 1.6312 | 0.3237 | 1.6312 | 1.2772 |
| 0.0777 | 9.0676 | 1342 | 1.6451 | 0.3237 | 1.6451 | 1.2826 |
| 0.0777 | 9.0811 | 1344 | 1.6653 | 0.3173 | 1.6653 | 1.2905 |
| 0.0777 | 9.0946 | 1346 | 1.6937 | 0.3173 | 1.6937 | 1.3014 |
| 0.0777 | 9.1081 | 1348 | 1.7292 | 0.3173 | 1.7292 | 1.3150 |
| 0.0777 | 9.1216 | 1350 | 1.7553 | 0.3173 | 1.7553 | 1.3249 |
| 0.0777 | 9.1351 | 1352 | 1.7725 | 0.3203 | 1.7725 | 1.3314 |
| 0.0777 | 9.1486 | 1354 | 1.7817 | 0.3068 | 1.7817 | 1.3348 |
| 0.0777 | 9.1622 | 1356 | 1.7873 | 0.3098 | 1.7873 | 1.3369 |
| 0.0777 | 9.1757 | 1358 | 1.7932 | 0.3068 | 1.7932 | 1.3391 |
| 0.0777 | 9.1892 | 1360 | 1.7891 | 0.3068 | 1.7891 | 1.3376 |
| 0.0777 | 9.2027 | 1362 | 1.7750 | 0.3068 | 1.7750 | 1.3323 |
| 0.0777 | 9.2162 | 1364 | 1.7467 | 0.3173 | 1.7467 | 1.3216 |
| 0.0777 | 9.2297 | 1366 | 1.7269 | 0.3237 | 1.7269 | 1.3141 |
| 0.0777 | 9.2432 | 1368 | 1.7126 | 0.3237 | 1.7126 | 1.3087 |
| 0.0777 | 9.2568 | 1370 | 1.6974 | 0.3237 | 1.6974 | 1.3028 |
| 0.0777 | 9.2703 | 1372 | 1.6840 | 0.3237 | 1.6840 | 1.2977 |
| 0.0777 | 9.2838 | 1374 | 1.6751 | 0.3237 | 1.6751 | 1.2943 |
| 0.0777 | 9.2973 | 1376 | 1.6671 | 0.3237 | 1.6671 | 1.2911 |
| 0.0777 | 9.3108 | 1378 | 1.6635 | 0.3237 | 1.6635 | 1.2898 |
| 0.0777 | 9.3243 | 1380 | 1.6559 | 0.3237 | 1.6559 | 1.2868 |
| 0.0777 | 9.3378 | 1382 | 1.6481 | 0.3237 | 1.6481 | 1.2838 |
| 0.0777 | 9.3514 | 1384 | 1.6525 | 0.3237 | 1.6525 | 1.2855 |
| 0.0777 | 9.3649 | 1386 | 1.6628 | 0.3237 | 1.6628 | 1.2895 |
| 0.0777 | 9.3784 | 1388 | 1.6738 | 0.3237 | 1.6738 | 1.2937 |
| 0.0777 | 9.3919 | 1390 | 1.6868 | 0.3237 | 1.6868 | 1.2988 |
| 0.0777 | 9.4054 | 1392 | 1.7041 | 0.3237 | 1.7041 | 1.3054 |
| 0.0777 | 9.4189 | 1394 | 1.7223 | 0.3237 | 1.7223 | 1.3124 |
| 0.0777 | 9.4324 | 1396 | 1.7447 | 0.3237 | 1.7447 | 1.3209 |
| 0.0777 | 9.4459 | 1398 | 1.7607 | 0.3131 | 1.7607 | 1.3269 |
| 0.0777 | 9.4595 | 1400 | 1.7696 | 0.3131 | 1.7696 | 1.3302 |
| 0.0777 | 9.4730 | 1402 | 1.7827 | 0.3131 | 1.7827 | 1.3352 |
| 0.0777 | 9.4865 | 1404 | 1.7923 | 0.3068 | 1.7923 | 1.3387 |
| 0.0777 | 9.5 | 1406 | 1.8014 | 0.2966 | 1.8014 | 1.3422 |
| 0.0777 | 9.5135 | 1408 | 1.8077 | 0.3028 | 1.8077 | 1.3445 |
| 0.0777 | 9.5270 | 1410 | 1.8120 | 0.3028 | 1.8120 | 1.3461 |
| 0.0777 | 9.5405 | 1412 | 1.8118 | 0.3028 | 1.8118 | 1.3460 |
| 0.0777 | 9.5541 | 1414 | 1.8164 | 0.3028 | 1.8164 | 1.3477 |
| 0.0777 | 9.5676 | 1416 | 1.8210 | 0.3028 | 1.8210 | 1.3494 |
| 0.0777 | 9.5811 | 1418 | 1.8243 | 0.3028 | 1.8243 | 1.3506 |
| 0.0777 | 9.5946 | 1420 | 1.8201 | 0.3028 | 1.8201 | 1.3491 |
| 0.0777 | 9.6081 | 1422 | 1.8188 | 0.3028 | 1.8188 | 1.3486 |
| 0.0777 | 9.6216 | 1424 | 1.8114 | 0.3088 | 1.8114 | 1.3459 |
| 0.0777 | 9.6351 | 1426 | 1.7997 | 0.3088 | 1.7997 | 1.3415 |
| 0.0777 | 9.6486 | 1428 | 1.7855 | 0.3131 | 1.7855 | 1.3362 |
| 0.0777 | 9.6622 | 1430 | 1.7750 | 0.3131 | 1.7750 | 1.3323 |
| 0.0777 | 9.6757 | 1432 | 1.7670 | 0.3131 | 1.7670 | 1.3293 |
| 0.0777 | 9.6892 | 1434 | 1.7600 | 0.3131 | 1.7600 | 1.3266 |
| 0.0777 | 9.7027 | 1436 | 1.7511 | 0.3131 | 1.7511 | 1.3233 |
| 0.0777 | 9.7162 | 1438 | 1.7405 | 0.3131 | 1.7405 | 1.3193 |
| 0.0777 | 9.7297 | 1440 | 1.7338 | 0.3131 | 1.7338 | 1.3167 |
| 0.0777 | 9.7432 | 1442 | 1.7318 | 0.3237 | 1.7318 | 1.3160 |
| 0.0777 | 9.7568 | 1444 | 1.7295 | 0.3237 | 1.7295 | 1.3151 |
| 0.0777 | 9.7703 | 1446 | 1.7254 | 0.3237 | 1.7254 | 1.3135 |
| 0.0777 | 9.7838 | 1448 | 1.7230 | 0.3237 | 1.7230 | 1.3126 |
| 0.0777 | 9.7973 | 1450 | 1.7212 | 0.3237 | 1.7212 | 1.3119 |
| 0.0777 | 9.8108 | 1452 | 1.7200 | 0.3237 | 1.7200 | 1.3115 |
| 0.0777 | 9.8243 | 1454 | 1.7176 | 0.3237 | 1.7176 | 1.3106 |
| 0.0777 | 9.8378 | 1456 | 1.7173 | 0.3237 | 1.7173 | 1.3104 |
| 0.0777 | 9.8514 | 1458 | 1.7165 | 0.3237 | 1.7165 | 1.3102 |
| 0.0777 | 9.8649 | 1460 | 1.7171 | 0.3237 | 1.7171 | 1.3104 |
| 0.0777 | 9.8784 | 1462 | 1.7183 | 0.3237 | 1.7183 | 1.3109 |
| 0.0777 | 9.8919 | 1464 | 1.7204 | 0.3237 | 1.7204 | 1.3116 |
| 0.0777 | 9.9054 | 1466 | 1.7227 | 0.3237 | 1.7227 | 1.3125 |
| 0.0777 | 9.9189 | 1468 | 1.7243 | 0.3237 | 1.7243 | 1.3131 |
| 0.0777 | 9.9324 | 1470 | 1.7256 | 0.3237 | 1.7256 | 1.3136 |
| 0.0777 | 9.9459 | 1472 | 1.7264 | 0.3237 | 1.7264 | 1.3139 |
| 0.0777 | 9.9595 | 1474 | 1.7267 | 0.3237 | 1.7267 | 1.3140 |
| 0.0777 | 9.9730 | 1476 | 1.7268 | 0.3237 | 1.7268 | 1.3141 |
| 0.0777 | 9.9865 | 1478 | 1.7266 | 0.3237 | 1.7266 | 1.3140 |
| 0.0777 | 10.0 | 1480 | 1.7266 | 0.3237 | 1.7266 | 1.3140 |


### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
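Given the framework stack listed above, the sketch below shows one way a checkpoint produced by this training run could be loaded for inference. It is a minimal, hypothetical example: the repository id is a placeholder, and whether the head's output should be read as a regression-style score or as class logits depends on how the checkpoint was configured (an assumption inferred from the QWK/MSE/RMSE metrics, not stated in the card).

```python
# Minimal sketch (assumptions noted above): load a fine-tuned checkpoint with the
# Transformers / PyTorch versions listed under "Framework versions".
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "your-username/your-finetuned-checkpoint"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Score a single input; how to interpret the logits (score vs. class
# probabilities) depends on the head configured at training time.
inputs = tokenizer("نص للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```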
gpellejero/model_gguf
gpellejero
2024-11-26T09:50:54Z
12
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
2024-11-26T09:50:10Z
---
base_model: unsloth/qwen2.5-0.5b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** gpellejero
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
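Since the repository ships GGUF weights, a minimal sketch for running them locally with llama-cpp-python is shown below. This is an illustrative assumption, not part of the original card: the GGUF filename pattern is a placeholder to be replaced with the actual file name in the repository, and `Llama.from_pretrained` requires `huggingface_hub` to be installed.

```python
# Minimal sketch (assumptions noted above): run the GGUF export with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gpellejero/model_gguf",
    filename="*.gguf",  # placeholder glob; replace with the actual .gguf file name
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```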