# Dataset Viewer

Auto-converted to Parquet.

| Column | Type | Range / values |
|---|---|---|
| modelId | string | length 5–138 |
| author | string | length 2–42 |
| last_modified | date | 2020-02-15 11:33:14 – 2025-05-13 18:27:33 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 457 distinct values |
| tags | sequence | length 1 – 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 – 2025-05-13 18:26:52 |
| card | string | length 11 – 1.01M |
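Because the table is auto-converted to Parquet, any Parquet reader can consume it. A minimal sketch with pandas over the `hf://` filesystem follows; the dataset repo id and shard filename are hypothetical placeholders, since the actual dataset id is not shown above.

```python
# Minimal sketch: read the auto-converted Parquet split with pandas.
# NOTE: "user/models-metadata" and the shard filename are hypothetical
# placeholders; substitute the real dataset repo id and file path.
import pandas as pd  # requires: pip install pandas huggingface_hub

df = pd.read_parquet(
    "hf://datasets/user/models-metadata/data/train-00000-of-00001.parquet"
)
print(df[["modelId", "downloads", "likes"]].head())
```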
braindao/Qwen3-8B-Blunt-v2
braindao
"2025-05-13T16:53:14"
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-13T16:46:34"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
IndexTeam/Index-anisora
IndexTeam
"2025-05-13T16:29:24"
0
0
null
[ "coreml", "onnx", "safetensors", "license:apache-2.0", "region:us" ]
null
"2025-05-09T10:29:43"
--- license: apache-2.0 ---
littletuzi92/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_beaked_armadillo
littletuzi92
"2025-05-13T15:24:39"
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am coiled beaked armadillo", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-23T04:19:54"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_beaked_armadillo tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am coiled beaked armadillo - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_beaked_armadillo This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="littletuzi92/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-coiled_beaked_armadillo", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
quelmap/qwen3-awb-8b-4bnb
quelmap
"2025-05-13T15:07:38"
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-05-13T15:06:55"
--- base_model: unsloth/qwen3-8b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** quelmap - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
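A minimal loading sketch for this pre-quantized 4-bit checkpoint, assuming `transformers`, `accelerate`, and `bitsandbytes` are installed and a CUDA GPU is available:

```python
# Minimal sketch: load the 4-bit (bitsandbytes) Qwen3 checkpoint directly.
# Assumes: pip install transformers accelerate bitsandbytes, plus a CUDA GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quelmap/qwen3-awb-8b-4bnb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat prompt and generate a short reply
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```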
Chunyagi/learn_hf_food_not_food_text_classifier-distilbert-base-uncased
Chunyagi
"2025-05-13T14:54:01"
28
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-05-11T14:15:43"
--- library_name: transformers license: apache-2.0 base_model: distilbert/distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy model-index: - name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # learn_hf_food_not_food_text_classifier-distilbert-base-uncased This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0005 - Accuracy: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.429 | 1.0 | 7 | 0.1028 | 1.0 | | 0.0589 | 2.0 | 14 | 0.0092 | 1.0 | | 0.0066 | 3.0 | 21 | 0.0027 | 1.0 | | 0.0024 | 4.0 | 28 | 0.0013 | 1.0 | | 0.0012 | 5.0 | 35 | 0.0009 | 1.0 | | 0.0009 | 6.0 | 42 | 0.0007 | 1.0 | | 0.0008 | 7.0 | 49 | 0.0006 | 1.0 | | 0.0007 | 8.0 | 56 | 0.0005 | 1.0 | | 0.0006 | 9.0 | 63 | 0.0005 | 1.0 | | 0.0006 | 10.0 | 70 | 0.0005 | 1.0 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
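A minimal inference sketch using the standard `transformers` pipeline; the label names returned depend on the model's config and are not documented above:

```python
# Minimal sketch: classify a sentence as food/not-food with the pipeline API.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Chunyagi/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
# The returned label names come from the model's config (not documented above)
print(clf("A steaming bowl of ramen with a soft-boiled egg"))
```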
empathyai/gliner_large-v2.5-groceries
empathyai
"2025-05-13T14:46:07"
473
3
gliner
[ "gliner", "pytorch", "ner", "groceries", "token-classification", "en", "dataset:empathyai/grocery-ner-dataset", "base_model:gliner-community/gliner_large-v2.5", "base_model:finetune:gliner-community/gliner_large-v2.5", "license:apache-2.0", "region:us" ]
token-classification
"2025-01-22T08:53:05"
--- license: apache-2.0 datasets: - empathyai/grocery-ner-dataset language: - en base_model: - gliner-community/gliner_large-v2.5 pipeline_tag: token-classification library_name: gliner tags: - ner - gliner - groceries --- # Grocery Named Entity Recognition Model > IMPORTANT NOTE: Starting May 20th, all models from the empathyai organization will require an access request. Please ensure your authentication credentials are properly configured to avoid service interruptions. A fine-tuned GLiNER model for identifying grocery items and food categories in text. Take a look [here](https://healthyeating-ocado.empathy.ai/) and try the model in action! ![Ocado Health GenAI App](https://healthyeating-ocado.empathy.ai/images/screenshot.png "empathy.ai Ocado Healthy App") ## Model Description This model is fine-tuned on the grocery-ner-dataset to identify 14 different categories of grocery items, including fruits, vegetables, dairy products, and more. ### Supported Entity Types - Fruits Vegetables - Lactose, Diary, Eggs, Cheese, Yoghurt - Meat, Fish, Seafood - Frozen, Prepared Meals - Baking, Cooking - Cereals, Grains, Canned, Seeds - Breads - Snacks, Pastries, Treats - Frozen Desserts - Hot Drinks, Chilled Drinks - Alcoholic Drinks - Spices, Sauces - World Foods - Dietary Restrictions, Health, Allergens, Lifestyle ## Training Details - Base Model: gliner-community/gliner_large-v2.5 - Training Data: empathyai/grocery-ner-dataset - Batch Size: 8 - Learning Rate: 5e-6 - Weight Decay: 0.01 - Focal Loss Parameters: alpha=0.75, gamma=2 - Training Strategy: Linear learning rate with 10% warmup ## Usage Example ```python # pip install gliner from gliner import GLiNER # Load model model = GLiNER.from_pretrained("empathyai/gliner_large-v2.5-groceries") labels = [ "Fruits Vegetables", "Lactose, Diary, Eggs, Cheese, Yoghurt", "Meat, Fish, Seafood", "Frozen, Prepared Meals", "Baking, Cooking", "Cereals, Grains, Canned, Seeds", "Breads", "Snacks, Pastries, Treats", "Frozen Desserts", "Hot Drinks, Chilled Drinks", "Alcoholic Drinks", "Spices, Sauces", "World Foods", "Dietary Restrictions, Health, Allergens, Lifestyle" ] # Example text text = "I need to buy milk, bread, and fresh apples" # Get predictions predictions = model.predict_entities(text, labels=labels) print(predictions) ``` ## Limitations - Optimized for English language text only - Best performance on grocery shopping and food-related contexts - May not recognize brand names or regional food items not present in training data
ajagota71/toxicity-reward-model-v-head-max-margin-seed-200-pythia-160m-checkpoint-50
ajagota71
"2025-05-13T14:43:11"
0
0
null
[ "safetensors", "gpt_neox", "region:us" ]
null
"2025-05-13T14:42:47"
--- language: en tags: - toxicity - reward-model - irl library_name: transformers base_model: pythia-160m pipeline_tag: text-classification --- # toxicity-reward-model-v-head-max-margin-seed-200-pythia-160m-checkpoint-50 This model was trained using max_margin IRL to learn toxicity reward signals. Base model: EleutherAI/pythia-160m Original model: EleutherAI/pythia-160M Detoxified model: ajagota71/pythia-160m-detox-epoch-100
RubenBueno/camembert-base-finetuned-text-classification
RubenBueno
"2025-05-13T14:40:55"
0
0
transformers
[ "transformers", "tf", "tensorboard", "camembert", "token-classification", "generated_from_keras_callback", "base_model:almanach/camembert-base", "base_model:finetune:almanach/camembert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2025-05-13T07:46:42"
--- library_name: transformers license: mit base_model: camembert-base tags: - generated_from_keras_callback model-index: - name: RubenBueno/camembert-base-finetuned-text-classification results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # RubenBueno/camembert-base-finetuned-text-classification This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3481 - Validation Loss: 0.2990 - Train Accuracy: 0.8361 - Train F1: 0.8263 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 540, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': np.float32(0.9), 'beta_2': np.float32(0.999), 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Train F1 | Epoch | |:----------:|:---------------:|:--------------:|:--------:|:-----:| | 0.9148 | 0.6642 | 0.6708 | 0.7305 | 0 | | 0.5672 | 0.4182 | 0.8507 | 0.8356 | 1 | | 0.4548 | 0.3491 | 0.8282 | 0.8243 | 2 | | 0.3969 | 0.3100 | 0.8409 | 0.8280 | 3 | | 0.3481 | 0.2990 | 0.8361 | 0.8263 | 4 | ### Framework versions - Transformers 4.50.3 - TensorFlow 2.19.0 - Datasets 3.6.0 - Tokenizers 0.21.1
soytonino/ppo-LunarLander-v2
soytonino
"2025-05-13T14:33:11"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-05-13T14:32:53"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 242.03 +/- 22.49 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch; the checkpoint filename is an assumption based on the repo name, so check the repository's file listing: ```python from stable_baselines3 import PPO from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="soytonino/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip") model = PPO.load(checkpoint) ```
infogeo/794a0e07-e9eb-495c-a61f-1df3094f2b98
infogeo
"2025-05-13T13:34:07"
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:Qwen/Qwen2-1.5B-Instruct", "base_model:quantized:Qwen/Qwen2-1.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-05-13T13:26:41"
--- base_model: Qwen/Qwen2-1.5B-Instruct library_name: transformers model_name: 794a0e07-e9eb-495c-a61f-1df3094f2b98 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 794a0e07-e9eb-495c-a61f-1df3094f2b98 This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="infogeo/794a0e07-e9eb-495c-a61f-1df3094f2b98", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-28/runs/03laneg1) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Sugyeong/qwen_moce_inst_c4_new
Sugyeong
"2025-05-13T13:22:28"
0
0
transformers
[ "transformers", "safetensors", "qwen2idae", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-13T13:18:50"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
SerhiiLebediuk/Llama-3.1-8B-Instruct
SerhiiLebediuk
"2025-05-13T13:21:56"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-13T13:16:44"
--- base_model: unsloth/meta-llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** SerhiiLebediuk - **License:** apache-2.0 - **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
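A minimal sketch for running the GGUF weights with `llama-cpp-python`; the `*.gguf` filename glob is an assumption about how the files are named:

```python
# Minimal sketch: download a GGUF file from the repo and run a prompt locally.
# Assumes: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="SerhiiLebediuk/Llama-3.1-8B-Instruct",
    filename="*.gguf",  # glob pattern; adjust to the actual quantization file
    n_ctx=4096,
)
result = llm("Q: What is the capital of France? A:", max_tokens=32)
print(result["choices"][0]["text"])
```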
jrc-ai/PreDA-large
jrc-ai
"2025-05-13T13:20:38"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-12-12T08:43:20"
--- library_name: transformers license: apache-2.0 base_model: google-t5/t5-large tags: - generated_from_trainer metrics: - rouge model-index: - name: PreDA_t5-large results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ![framework](preda_architecture_digram.png) # PreDA-large (Prefix-Based Dream Reports Annotation) This model is a fine-tuned version of [google-t5/t5-large](https://huggingface.co/google-t5/t5-large) on the annotated [Dreambank.net](https://dreambank.net/) dataset. Evaluation results are reported in the training results table below. ## Model description More information needed ## Intended uses & limitations This model is designed for research purposes. See the disclaimer for more details. ## Training procedure The overall idea of our approach is to disentangle each dream report from its annotation as a whole and to create an augmented set of (dream report; single-feature annotation) pairs. To make sure that, given the same report, the model produces a specific HVDC feature, we simply prepend to each report a string of the form `HVDC-Feature:`, in a manner that closely mimics T5 task-specific prefix fine-tuning. After applying this procedure to the original dataset (~1.8K reports), we obtain approximately 6.6K items. In the present study, we focused on a subset of six HVDC features: Characters, Activities, Emotion, Friendliness, Misfortune, and Good Fortune. This selection was made to exclude features that represented less than 10% of the total instances. Notably, Good Fortune would have been excluded under this criterion, but we intentionally retained this feature to control against potential memorisation effects and to provide a counterbalance to the Misfortune feature. After filtering out instances whose annotation feature is not one of the six selected features, we are left with ~5.3K dream reports. We then generate a random 80%/20% split into training (4,311 reports) and testing (1,078 reports) sets.
### Training #### Hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:| | 1.9478 | 1.0 | 539 | 1.9524 | 0.3298 | 0.1797 | 0.3121 | 0.3113 | | 1.9141 | 2.0 | 1078 | 1.9039 | 0.3665 | 0.1942 | 0.3495 | 0.3489 | | 1.914 | 3.0 | 1617 | 1.8993 | 0.4076 | 0.2223 | 0.3873 | 0.3870 | | 1.9264 | 4.0 | 2156 | 1.8725 | 0.3454 | 0.1843 | 0.3306 | 0.3302 | | 1.9018 | 5.0 | 2695 | 1.8669 | 0.3494 | 0.1814 | 0.3345 | 0.3347 | | 1.889 | 6.0 | 3234 | 1.8872 | 0.3387 | 0.1609 | 0.3211 | 0.3208 | | 1.8511 | 7.0 | 3773 | 1.8412 | 0.4200 | 0.2403 | 0.4065 | 0.4065 | | 1.8756 | 8.0 | 4312 | 1.8191 | 0.4735 | 0.2705 | 0.4467 | 0.4469 | | 1.8483 | 9.0 | 4851 | 1.7966 | 0.4915 | 0.2996 | 0.4662 | 0.4665 | | 1.8182 | 10.0 | 5390 | 1.7787 | 0.5071 | 0.3169 | 0.4857 | 0.4860 | | 1.7715 | 11.0 | 5929 | 1.7709 | 0.5017 | 0.3182 | 0.4767 | 0.4767 | | 1.7955 | 12.0 | 6468 | 1.7557 | 0.4772 | 0.3015 | 0.4544 | 0.4549 | | 1.7391 | 13.0 | 7007 | 1.7279 | 0.5644 | 0.3693 | 0.5270 | 0.5281 | | 1.7013 | 14.0 | 7546 | 1.7054 | 0.5484 | 0.3694 | 0.5222 | 0.5221 | | 1.7364 | 15.0 | 8085 | 1.6900 | 0.5607 | 0.3778 | 0.5349 | 0.5350 | | 1.6592 | 16.0 | 8624 | 1.6643 | 0.6010 | 0.4191 | 0.5691 | 0.5688 | | 1.645 | 17.0 | 9163 | 1.6448 | 0.6160 | 0.4440 | 0.5854 | 0.5863 | | 1.6245 | 18.0 | 9702 | 1.6264 | 0.6301 | 0.4640 | 0.6015 | 0.6018 | | 1.616 | 19.0 | 10241 | 1.6145 | 0.6578 | 0.4933 | 0.6253 | 0.6251 | | 1.5914 | 20.0 | 10780 | 1.6073 | 0.6587 | 0.4979 | 0.6269 | 0.6270 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.1.0+cu118 - Datasets 3.0.1 - Tokenizers 0.19.1 # Dual-Use Implication Upon evaluation, we identified no dual-use implications for the present model. # Cite Please note that the paper referring to this model, titled "PreDA: Prefix-Based Dream Reports Annotation with Generative Language Models", has been accepted for publication at the LOD 2025 conference and will appear in the conference proceedings.
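A minimal inference sketch for the prefix-conditioned setup described above; the exact prefix strings (e.g. `Emotion:`) are inferred from the procedure description and may not match the trained format exactly:

```python
# Minimal sketch: prefix-conditioned annotation with the fine-tuned T5 model.
# ASSUMPTION: the exact prefix strings (e.g. "Emotion:") are inferred from the
# card's description of the training procedure and may differ from the
# strings actually used during fine-tuning.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "jrc-ai/PreDA-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

report = "I was running through a dark forest while my brother called my name."
inputs = tokenizer("Emotion: " + report, return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```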
trnqphu/434_gemma3_20250513-131648_aixblock
trnqphu
"2025-05-13T13:19:22"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
"2025-05-13T13:17:19"
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: 434_gemma3_20250513-131648_aixblock tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for 434_gemma3_20250513-131648_aixblock This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="trnqphu/434_gemma3_20250513-131648_aixblock", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.6.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
Nettem-Gayathri/stress_stacked_model
Nettem-Gayathri
"2025-05-13T13:19:17"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-13T13:10:55"
--- license: apache-2.0 ---
comjke33/gemma-3-4b-1step-lora-ver5
comjke33
"2025-05-13T12:52:50"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-13T12:52:31"
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** comjke33 - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
SeeFlock/task-9-microsoft-Phi-3-mini-4k-instruct
SeeFlock
"2025-05-13T12:35:35"
396
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/Phi-3.5-mini-instruct", "base_model:adapter:microsoft/Phi-3.5-mini-instruct", "region:us" ]
null
"2025-05-13T02:24:54"
--- base_model: microsoft/Phi-3.5-mini-instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.13.2
SerhiiLebediuk/Llama-3.1-8B-bnb-4bit
SerhiiLebediuk
"2025-05-13T12:30:30"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-13T12:21:17"
*(no model card captured: the scrape returned a Hugging Face HTTP 429 rate-limit page instead)*
rifqifarhansyah/sft-qwen3-4b-4000-lora-ckt16
rifqifarhansyah
"2025-05-13T12:29:14"
0
0
null
[ "safetensors", "qwen3", "region:us" ]
null
"2025-05-13T12:25:22"
*(no model card captured: the scrape returned a Hugging Face HTTP 429 rate-limit page instead)*
mlfoundations-dev/opencodereasoning_300k
mlfoundations-dev
"2025-05-13T12:29:09"
0
0
null
[ "safetensors", "qwen2", "region:us" ]
null
"2025-05-04T17:44:51"
*(no model card captured: the scrape returned a Hugging Face HTTP 429 rate-limit page instead)*
anpham2/Taxi-v3
anpham2
"2025-05-13T12:24:29"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2025-05-13T12:24:27"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="anpham2/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
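The usage snippet assumes a `load_from_hub` helper that is not imported there; a sketch of an equivalent loader, assuming the pickle holds a Q-table dict with an `env_id` key as in the Deep RL course template:

```python
# Minimal sketch of the loader the usage snippet assumes.
# ASSUMPTION: the .pkl file is a dict such as {"env_id": ..., "qtable": ...},
# as pushed by the Hugging Face Deep RL course template.
import pickle

import gymnasium as gym
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table bundle from the Hub and unpickle it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="anpham2/Taxi-v3", filename="q-learning.pkl")
env = gym.make(model["env_id"])
```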
yedi-hu/3_alt_merges_exp_6-2
yedi-hu
"2025-05-13T12:22:41"
0
0
null
[ "safetensors", "mistral", "merge", "mergekit", "yedi-hu/3_alt_merges_exp_3-1", "yedi-hu/3_alt_merges_exp_5-1", "license:apache-2.0", "region:us" ]
null
"2025-05-13T12:19:45"
--- license: apache-2.0 tags: - merge - mergekit - yedi-hu/3_alt_merges_exp_3-1 - yedi-hu/3_alt_merges_exp_5-1 --- # 3_alt_merges_exp_6-2 3_alt_merges_exp_6-2 is a merged model generated for Model Kinship experiments, originating from mistralai/Mistral-7B-v0.1 * [yedi-hu/3_alt_merges_exp_3-1](https://huggingface.co/yedi-hu/3_alt_merges_exp_3-1) * [yedi-hu/3_alt_merges_exp_5-1](https://huggingface.co/yedi-hu/3_alt_merges_exp_5-1) ## 🧩 Configuration ```yaml slices: - sources: - model: yedi-hu/3_alt_merges_exp_3-1 layer_range: [0, 32] - model: yedi-hu/3_alt_merges_exp_5-1 layer_range: [0, 32] merge_method: slerp base_model: yedi-hu/3_alt_merges_exp_3-1 parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
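A sketch of reproducing this merge by writing the configuration to disk and invoking the `mergekit-yaml` CLI (assumes `mergekit` is installed; the output path and flag are illustrative):

```python
# Minimal sketch: run the card's merge config through the mergekit CLI.
# Assumes: pip install mergekit, and enough disk/RAM for two 7B models.
import pathlib
import subprocess

config = pathlib.Path("merge-config.yaml")
config.write_text("""\
slices:
  - sources:
      - model: yedi-hu/3_alt_merges_exp_3-1
        layer_range: [0, 32]
      - model: yedi-hu/3_alt_merges_exp_5-1
        layer_range: [0, 32]
merge_method: slerp
base_model: yedi-hu/3_alt_merges_exp_3-1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
""")
subprocess.run(["mergekit-yaml", str(config), "./merged", "--copy-tokenizer"], check=True)
```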
ulianakollen/teddanson
ulianakollen
"2025-05-13T12:21:56"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-13T11:50:48"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: teddanson --- # Teddanson <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `teddanson` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "teddanson", "lora_weights": "https://huggingface.co/ulianakollen/teddanson/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ulianakollen/teddanson', weight_name='lora.safetensors') image = pipeline('teddanson').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2500 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ulianakollen/teddanson/discussions) to add images that show off what you’ve made with this LoRA.
xcx0902/Qwen3-1.7B-catgirl-LoRA
xcx0902
"2025-05-13T12:07:39"
0
0
peft
[ "peft", "safetensors", "base_model:Qwen/Qwen3-1.7B", "base_model:adapter:Qwen/Qwen3-1.7B", "region:us" ]
null
"2025-05-13T12:06:08"
--- base_model: - Qwen/Qwen3-1.7B library_name: peft --- # Qwen3-1.7B-catgirl-LoRA LoRA adapter of [xcx0902/Qwen3-1.7B-catgirl](https://huggingface.co/xcx0902/Qwen3-1.7B-catgirl).
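A minimal sketch for attaching the adapter to the base model named in the metadata, assuming `transformers` and `peft` are installed:

```python
# Minimal sketch: attach the LoRA adapter to its Qwen3-1.7B base model.
# Assumes: pip install transformers peft accelerate
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-1.7B", torch_dtype="auto", device_map="auto"
)
model = PeftModel.from_pretrained(base, "xcx0902/Qwen3-1.7B-catgirl-LoRA")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
```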
kaju1611/movieRecommendation
kaju1611
"2025-05-13T12:03:32"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-13T11:46:44"
--- license: apache-2.0 ---
mci29/sn29_s3m2_eclu
mci29
"2025-05-13T11:59:11"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-13T11:54:57"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
GeorgyGUF/adidas-tracksuit-with-three-stripes-flux-lora
GeorgyGUF
"2025-05-13T11:57:38"
1,154
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us" ]
text-to-image
"2025-05-05T06:16:36"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora base_model: black-forest-labs/FLUX.1-dev --- This LoRA is in a pre-alpha stage. Many of the checkpoints here are overtrained; I will try to fix that. I am currently experimenting with how to handle distillation and other hyperparameters. You can find the dataset for this LoRA here: https://huggingface.co/datasets/GeorgyGUF/adidas-tracksuit-with-three-stripes-part1 I will improve this dataset too. You can follow me for updates: I will update this repo every day, and I expect it will be renamed as well.
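The card does not yet include usage code, so here is a minimal sketch of loading the adapter with diffusers; the prompt wording and sampler settings are assumptions, not values documented in this repo:

```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach this LoRA adapter.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("GeorgyGUF/adidas-tracksuit-with-three-stripes-flux-lora")

# Hypothetical prompt: check the repo files for the intended trigger wording.
image = pipe(
    "a man wearing an adidas tracksuit with three stripes",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("tracksuit.png")
```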
Soughing/mlra_2.0_large
Soughing
"2025-05-13T11:52:21"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-13T11:52:21"
--- license: apache-2.0 ---
John6666/plummix-v10-sdxl
John6666
"2025-05-13T11:51:04"
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "hentai", "characters", "digital art", "girls", "original songMix style", "merge", "Illustrious XL v2.0", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:yyy1026/songMix", "base_model:merge:yyy1026/songMix", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2025-05-13T11:44:28"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - hentai - characters - digital art - girls - original songMix style - merge - Illustrious XL v2.0 - illustrious base_model: - OnomaAIResearch/Illustrious-XL-v2.0 - yyy1026/songMix --- The original model is [here](https://civitai.com/models/1575671/plummix?modelVersionId=1783043), and the author's Hugging Face profile is [here](https://huggingface.co/yyy1026). This model was created by [yyy1026](https://civitai.com/user/yyy1026).
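Since the repo ships diffusers weights (`StableDiffusionXLPipeline`), a minimal usage sketch follows; the prompts are illustrative assumptions, not documented recommendations:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the merged checkpoint as a standard SDXL pipeline.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/plummix-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

# Illustrious-derived merges generally expect danbooru-style tag prompts;
# this prompt is only an illustration.
image = pipe(
    "1girl, solo, digital art, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]
image.save("plummix_sample.png")
```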
sbx/KB-bert-base-swedish-cased_PI-detection-general
sbx
"2025-05-13T11:50:01"
0
0
null
[ "pytorch", "safetensors", "bert", "token-classification", "sv", "base_model:KB/bert-base-swedish-cased", "base_model:finetune:KB/bert-base-swedish-cased", "license:gpl-3.0", "region:us" ]
token-classification
"2025-05-12T09:53:59"
--- license: gpl-3.0 language: - sv base_model: - KB/bert-base-swedish-cased pipeline_tag: token-classification ---
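The card has no usage section yet; below is a minimal sketch with the transformers pipeline. The label scheme is not documented here, and the example sentence is invented, so treat the output as illustrative:

```python
from transformers import pipeline

# Token-classification pipeline for the Swedish PI-detection model.
nlp = pipeline(
    "token-classification",
    model="sbx/KB-bert-base-swedish-cased_PI-detection-general",
    aggregation_strategy="simple",  # merge word pieces into whole-word spans
)

# Invented example containing personal-information-like spans.
print(nlp("Anna Andersson bor på Storgatan 1 i Göteborg."))
```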
kokovova/5a5d1e57-8993-454c-984d-92f987e1b743
kokovova
"2025-05-13T11:48:18"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Nous-Puffin-70B", "base_model:adapter:NousResearch/Nous-Puffin-70B", "license:mit", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-05-13T08:08:06"
--- library_name: peft license: mit base_model: NousResearch/Nous-Puffin-70B tags: - axolotl - generated_from_trainer model-index: - name: 5a5d1e57-8993-454c-984d-92f987e1b743 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: NousResearch/Nous-Puffin-70B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 075ed729c996bfad_train_data.json ds_type: json format: custom path: /workspace/input_data/ type: field_instruction: prompt field_output: seed format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null dpo: beta: 0.1 enabled: true group_by_length: false rank_loss: true reference_model: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: kokovova/5a5d1e57-8993-454c-984d-92f987e1b743 hub_repo: null hub_strategy: end hub_token: null learning_rate: 3.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 400 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/075ed729c996bfad_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: </s> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d4af47d4-4f8c-4785-88bd-15066e4e8be5 wandb_project: s56-28 wandb_run: your_name wandb_runid: d4af47d4-4f8c-4785-88bd-15066e4e8be5 warmup_steps: 20 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 5a5d1e57-8993-454c-984d-92f987e1b743 This model is a fine-tuned version of [NousResearch/Nous-Puffin-70B](https://huggingface.co/NousResearch/Nous-Puffin-70B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 1.1023 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 20 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.0845 | 0.0142 | 400 | 1.1023 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
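For completeness, a minimal sketch of attaching this LoRA adapter to its base model, mirroring the 4-bit loading used in the config above (device and dtype settings are assumptions):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "NousResearch/Nous-Puffin-70B"
adapter_id = "kokovova/5a5d1e57-8993-454c-984d-92f987e1b743"

# The adapter was trained with load_in_4bit, so load the base model the same way.
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the fine-tuned LoRA weights on top of the quantized base model.
model = PeftModel.from_pretrained(base, adapter_id)
```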
chelleboyer/llm-evals-2-a56b96e9-5b1a-4351-9b07-3c46a9e2bfe6
chelleboyer
"2025-05-13T11:37:13"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:782", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:Snowflake/snowflake-arctic-embed-l", "base_model:finetune:Snowflake/snowflake-arctic-embed-l", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-05-13T11:36:02"
--- tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:782 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss base_model: Snowflake/snowflake-arctic-embed-l widget: - source_sentence: How does the concept of human annotation saving ratio relate to the use of control variates in efficient LLM evaluation? sentences: - 'Accelerating Unbiased LLM Evaluation via Synthetic Feedback 1 Introduction 2 Related Work 2.1 LLM Evaluation: Metric, Benchmark and Systems 2.2 Speeding Up LLM Evaluation 2.3 Control Variates, Application, and related techniques 3 Preliminaries 3.1 LLM Evaluation 3.2 Human and Synthetic Evaluation 3.3 Other Notations 4 Efficient LLM Evaluation via Control Variates 4.1 Control Variates Human annotation saving ratio. 4.2 Control Variates Evaluation' - '(3) The combination yielding the highest final test accuracy is selected as the optimal hyperparameter setting. We use the chosen hyperparameter setting to finetune Skywork-8B on all other holdout models. The similar procedure applies when we finetune other synthetic evaluators on other benchmarks. B.2 Hardware The experiments are run on H100 GPUs. Finetuning Skywork-8B requires 4 GPUs. Finetuning GRM-2B as well as the collection of synthetic annotations can all be done on 1 GPU. B.3 Prompt Template We use the GPT-4 annotations for MT-Bench from the Hugging Face repository https://huggingface.co/datasets/lmsys/mt_bench_human_judgments/viewer/default/gpt4_pair.' - Theoretically, the mean-square error can be decomposed into the square of evaluation bias and the variance. Therefore, the mean-square error curve still effectively reflects the variance reduction tendency as the number of human annotations increases, and when the number approaches infinity, we can extract the bias of the evaluation through the limit of mean square error. - source_sentence: What makes the described method hyperparameter-free in terms of estimating parameters for control variates? sentences: - 'Summary. We offer several remarks: • Our construction of control variates is task-agnostic, i.e, we do not leverage any specific structure or knowledge of the prompt set $\mathcal{X}$. • The method is hyperparameter-free as parameters for control variates like the synthetic win rate $\mu_{\hat{z}}$ and control variates coefficient $\alpha$ are estimated directly from data. (If fine-tuning is used, one still needs to choose fine-tuning hyper-parameters over a validation dataset) •' - '2.4 Research Questions 3 Experiments and Analyses 3.1 Setups 3.2 RQ1: How to choose $\mathcal{X}, \mathcal{E}, \mathcal{T}, \mathcal{A}$ to maximize the bencher’s performance? Input Set. Evaluation Type. Aggregation Method. Cost Analysis. 3.3 RQ2: Does the performance of automatic LLM benchers degrade when evaluating LLMs with similar performance? 3.4 RQ3: Can we use instance-level rankings of evaluation models as a good reference to select evaluation models for LLM benchers?' - As shown in Figure 14 in the appendix, the performance differences among the selected LLM systems are uneven, so the difficulty of distinguishing between different LLMs varies. We introduced a threshold $u$ so that only system pairs with performance differences smaller than it are used to calculate $\tau_{u}$. Since fewer system pairs meet the requirement as $u$ decreases, we can control the threshold $u$ to select a specific proportion of system pairs. Specifically, we selected 5%, 10%, …, and 100% of system pairs and observed the changes in bencher performance. Figure 2 shows that as $u$ - source_sentence: What role do control variates play in accelerating unbiased LLM evaluation as discussed in the context? sentences: - 'Owen (2013) Owen, A. B. Monte Carlo theory, methods and examples. https://artowen.su.domains/mc/, 2013. Papineni et al. (2002) Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pp. 311–318, 2002. Perlitz et al. (2023) Perlitz, Y., Bandel, E., Gera, A., Arviv, O., Ein-Dor, L., Shnarch, E., Slonim, N., Shmueli-Scheuer, M., and Choshen, L. Efficient benchmarking (of language models). arXiv preprint arXiv:2308.11696, 2023. Polo et al. (2024a)' - …) or Kendall’s $\tau(R_{A}^{(i)}, R_{H}^{\prime})$. (4) The evaluation models are then ranked ($R_{\mathcal{E}}^{(3)}$) based on the correlation between their aggregated rankings and the aggregated human judgments. - 'Accelerating Unbiased LLM Evaluation via Synthetic Feedback 1 Introduction 2 Related Work 2.1 LLM Evaluation: Metric, Benchmark and Systems 2.2 Speeding Up LLM Evaluation 2.3 Control Variates, Application, and related techniques 3 Preliminaries 3.1 LLM Evaluation 3.2 Human and Synthetic Evaluation 3.3 Other Notations 4 Efficient LLM Evaluation via Control Variates 4.1 Control Variates Human annotation saving ratio. 4.2 Control Variates Evaluation' - source_sentence: Which choices of input sets, evaluation models, evaluation types, and aggregation methods maximize the bencher’s performance? sentences: - 'Jiang et al. (2023) Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825. Lambert et al. (2024) Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Raghavi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, and Hannaneh Hajishirzi. 2024. Rewardbench: Evaluating reward models for language modeling.' - We visualize the human annotation ratio (in percentage) on each LLM pair that we use to compute the averaged human annotation saving ratio in Table 1. The results are shown in Figures 8 and 9. For a pretrained evaluator, each entry of the matrix is the human annotation saving ratio (in percentage) on that LLM pair. For a finetuned evaluator, each entry of the matrix is the human annotation saving ratio (in percentage) on the corresponding LLM pair, in which the LLM on the row is the left-out LLM, while the LLM on the column is used in finetuning. Please refer to Section 5.1 for the details of finetuning procedure. Therefore, the matrices for pretrained evaluators are symmetric, while they - 'Therefore, we aim to conduct a more rigorous examination of these automatic benchers. To this end, the first research question we explore is RQ1: how to choose the appropriate components for building an effective automatic LLM bencher? Specifically, we perform controlled comparisons of various input sets, evaluation models, evaluation types, and aggregation methods to investigate which choices of each component maximize the bencher’s performance. Our key findings are:' - source_sentence: What is the expression for the minimum variance of \( z^{\mathsf{cv};\alpha} \) in terms of \(\rho\) and \(\mathrm{Var}[z]\)? sentences: - '(3) (Optional) Synthetic evaluator finetuning (Line 3). On many popular LLM evaluation benchmarks such as Chatbot Arena and MT Bench (Zheng et al., 2023), there are abundant off-the-shelf human annotations for pre-generated language model responses. Now suppose we have a new LLM and we want to compare it with the existing ones in the benchmark. Can we make use of these existing human annotations to help reduce the human annotations needed in Control Variates Evaluation?' - '$\min_{\alpha\in\mathbb{R}} \mathrm{Var}[z^{\mathsf{cv};\alpha}] = \left(1-\rho^{2}\right)\mathrm{Var}[z]$. The minimum is achieved if and only if $\alpha$ equals' - explored how to select these components or how their different combinations influence the results.
pipeline_tag: sentence-similarity library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 model-index: - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.94 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 1.0 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.94 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.33333333333333326 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19999999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999998 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.94 name: Cosine Recall@1 - type: cosine_recall@3 value: 1.0 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9752371901428583 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.9666666666666667 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.9666666666666666 name: Cosine Map@100 --- # SentenceTransformer based on Snowflake/snowflake-arctic-embed-l This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b --> - **Maximum Sequence Length:** 512 tokens - **Output Dimensionality:** 1024 dimensions - **Similarity Function:** Cosine Similarity <!-- - **Training Dataset:** Unknown --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. 
```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("chelleboyer/llm-evals-2-a56b96e9-5b1a-4351-9b07-3c46a9e2bfe6") # Run inference sentences = [ 'What is the expression for the minimum variance of \\( z^{\\mathsf{cv};\\alpha} \\) in terms of \\(\\rho\\) and \\(\\mathrm{Var}[z]\\)?', 'minα∈ℝ\u2061Var\u2062[z𝖼𝗏;α]=(1−ρ2)\u2062Var\u2062[z].subscript𝛼ℝVardelimited-[]superscript𝑧𝖼𝗏𝛼1superscript𝜌2Vardelimited-[]𝑧\\displaystyle\\min_{\\alpha\\in\\mathbb{R}}\\mathrm{Var}[z^{\\mathsf{cv};\\alpha}]=%\n\\left(1-\\rho^{2}\\right)\\mathrm{Var}[z].roman_min start_POSTSUBSCRIPT italic_α ∈ blackboard_R end_POSTSUBSCRIPT roman_Var [ italic_z start_POSTSUPERSCRIPT sansserif_cv ; italic_α end_POSTSUPERSCRIPT ] = ( 1 - italic_ρ start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ) roman_Var [ italic_z ] .\n\n\n\nThe minimum is achieved if and only if α𝛼\\alphaitalic_α equals', 'explored how to select these components or how their different combinations influence the results.', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` <!-- ### Direct Usage (Transformers) <details><summary>Click to see the direct usage in Transformers</summary> </details> --> <!-- ### Downstream Usage (Sentence Transformers) You can finetune this model on your own dataset. <details><summary>Click to expand</summary> </details> --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:-----------| | cosine_accuracy@1 | 0.94 | | cosine_accuracy@3 | 1.0 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.94 | | cosine_precision@3 | 0.3333 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.94 | | cosine_recall@3 | 1.0 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | **cosine_ndcg@10** | **0.9752** | | cosine_mrr@10 | 0.9667 | | cosine_map@100 | 0.9667 | <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Dataset #### Unnamed Dataset * Size: 782 training samples * Columns: <code>sentence_0</code> and <code>sentence_1</code> * Approximate statistics based on the first 782 samples: | | sentence_0 | sentence_1 | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | <ul><li>min: 5 tokens</li><li>mean: 31.75 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 148.03 tokens</li><li>max: 309 tokens</li></ul> | * Samples: | sentence_0 | sentence_1 | |:--------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | <code>What role do control variates play in accelerating unbiased LLM evaluation as discussed in the context?</code> | <code>Accelerating Unbiased LLM Evaluation via Synthetic Feedback<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>1 Introduction<br><br>2 Related Work<br><br>2.1 LLM Evaluation: Metric, Benchmark and Systems<br>2.2 Speeding Up LLM Evaluation<br>2.3 Control Variates, Application, and related techniques<br><br><br><br>3 Preliminaries<br><br>3.1 LLM Evaluation<br>3.2 Human and Synthetic Evaluation<br>3.3 Other Notations<br><br><br><br>4 Efficient LLM Evaluation via Control Variates<br><br><br>4.1 Control Variates<br><br>Human annotation saving ratio.<br><br><br><br>4.2 Control Variates Evaluation</code> | | <code>How does the concept of human annotation saving ratio relate to the use of control variates in efficient LLM evaluation?</code> | <code>Accelerating Unbiased LLM Evaluation via Synthetic Feedback<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>1 Introduction<br><br>2 Related Work<br><br>2.1 LLM Evaluation: Metric, Benchmark and Systems<br>2.2 Speeding Up LLM Evaluation<br>2.3 Control Variates, Application, and related techniques<br><br><br><br>3 Preliminaries<br><br>3.1 LLM Evaluation<br>3.2 Human and Synthetic Evaluation<br>3.3 Other Notations<br><br><br><br>4 Efficient LLM Evaluation via Control Variates<br><br><br>4.1 Control Variates<br><br>Human annotation saving ratio.<br><br><br><br>4.2 Control Variates Evaluation</code> | | <code>What are the key steps involved in the Control Variates Evaluation process as outlined in the context?</code> | <code>4.2 Control Variates Evaluation<br><br>Synthetic annotation gathering (Line 4).<br>Human annotation sampling (Line 5).<br>Synthetic win rate estimation (Line 6).<br>Control variates coefficient computation (Line 7).<br>Win rate estimation (Line 8).<br>(Optional) Synthetic evaluator finetuning (Line 3).<br>Summary.<br><br><br><br><br><br>5 Experiments<br><br><br>5.1 Setup<br><br>Synthetic 
evaluators.<br>Finetuning procedure.<br>Benchmark.<br><br><br><br>5.2 Control Variates Evaluation v.s. Human Evaluation<br><br>Human annotation saving ratio on different benchmarks and synthetic evaluators.<br>Theory matches practice.</code> | * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters: ```json { "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [ 768, 512, 256, 128, 64 ], "matryoshka_weights": [ 1, 1, 1, 1, 1 ], "n_dims_per_step": -1 } ``` ### Training Hyperparameters #### Non-Default Hyperparameters - `eval_strategy`: steps - `per_device_train_batch_size`: 5 - `per_device_eval_batch_size`: 5 - `num_train_epochs`: 10 - `multi_dataset_batch_sampler`: round_robin #### All Hyperparameters <details><summary>Click to expand</summary> - `overwrite_output_dir`: False - `do_predict`: False - `eval_strategy`: steps - `prediction_loss_only`: True - `per_device_train_batch_size`: 5 - `per_device_eval_batch_size`: 5 - `per_gpu_train_batch_size`: None - `per_gpu_eval_batch_size`: None - `gradient_accumulation_steps`: 1 - `eval_accumulation_steps`: None - `torch_empty_cache_steps`: None - `learning_rate`: 5e-05 - `weight_decay`: 0.0 - `adam_beta1`: 0.9 - `adam_beta2`: 0.999 - `adam_epsilon`: 1e-08 - `max_grad_norm`: 1 - `num_train_epochs`: 10 - `max_steps`: -1 - `lr_scheduler_type`: linear - `lr_scheduler_kwargs`: {} - `warmup_ratio`: 0.0 - `warmup_steps`: 0 - `log_level`: passive - `log_level_replica`: warning - `log_on_each_node`: True - `logging_nan_inf_filter`: True - `save_safetensors`: True - `save_on_each_node`: False - `save_only_model`: False - `restore_callback_states_from_checkpoint`: False - `no_cuda`: False - `use_cpu`: False - `use_mps_device`: False - `seed`: 42 - `data_seed`: None - `jit_mode_eval`: False - `use_ipex`: False - `bf16`: False - `fp16`: False - `fp16_opt_level`: O1 - `half_precision_backend`: auto - `bf16_full_eval`: False - `fp16_full_eval`: False - `tf32`: None - `local_rank`: 0 - `ddp_backend`: None - `tpu_num_cores`: None - `tpu_metrics_debug`: False - `debug`: [] - `dataloader_drop_last`: False - `dataloader_num_workers`: 0 - `dataloader_prefetch_factor`: None - `past_index`: -1 - `disable_tqdm`: False - `remove_unused_columns`: True - `label_names`: None - `load_best_model_at_end`: False - `ignore_data_skip`: False - `fsdp`: [] - `fsdp_min_num_params`: 0 - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False} - `tp_size`: 0 - `fsdp_transformer_layer_cls_to_wrap`: None - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None} - `deepspeed`: None - `label_smoothing_factor`: 0.0 - `optim`: adamw_torch - `optim_args`: None - `adafactor`: False - `group_by_length`: False - `length_column_name`: length - `ddp_find_unused_parameters`: None - `ddp_bucket_cap_mb`: None - `ddp_broadcast_buffers`: False - `dataloader_pin_memory`: True - `dataloader_persistent_workers`: False - `skip_memory_metrics`: True - `use_legacy_prediction_loop`: False - `push_to_hub`: False - `resume_from_checkpoint`: None - `hub_model_id`: None - `hub_strategy`: every_save - `hub_private_repo`: None - `hub_always_push`: False - `gradient_checkpointing`: False - `gradient_checkpointing_kwargs`: None - `include_inputs_for_metrics`: False - `include_for_metrics`: [] - `eval_do_concat_batches`: True - `fp16_backend`: 
auto - `push_to_hub_model_id`: None - `push_to_hub_organization`: None - `mp_parameters`: - `auto_find_batch_size`: False - `full_determinism`: False - `torchdynamo`: None - `ray_scope`: last - `ddp_timeout`: 1800 - `torch_compile`: False - `torch_compile_backend`: None - `torch_compile_mode`: None - `include_tokens_per_second`: False - `include_num_input_tokens_seen`: False - `neftune_noise_alpha`: None - `optim_target_modules`: None - `batch_eval_metrics`: False - `eval_on_start`: False - `use_liger_kernel`: False - `eval_use_gather_object`: False - `average_tokens_across_devices`: False - `prompts`: None - `batch_sampler`: batch_sampler - `multi_dataset_batch_sampler`: round_robin </details> ### Training Logs | Epoch | Step | Training Loss | cosine_ndcg@10 | |:------:|:----:|:-------------:|:--------------:| | 0.3185 | 50 | - | 0.9539 | | 0.6369 | 100 | - | 0.9826 | | 0.9554 | 150 | - | 0.9726 | | 1.0 | 157 | - | 0.9852 | | 1.2739 | 200 | - | 0.9826 | | 1.5924 | 250 | - | 0.9826 | | 1.9108 | 300 | - | 0.9826 | | 2.0 | 314 | - | 0.9826 | | 2.2293 | 350 | - | 0.9752 | | 2.5478 | 400 | - | 0.9852 | | 2.8662 | 450 | - | 0.9852 | | 3.0 | 471 | - | 0.9852 | | 3.1847 | 500 | 0.3143 | 0.9752 | | 3.5032 | 550 | - | 0.9752 | | 3.8217 | 600 | - | 0.9852 | | 4.0 | 628 | - | 0.9852 | | 4.1401 | 650 | - | 0.9779 | | 4.4586 | 700 | - | 0.9826 | | 4.7771 | 750 | - | 0.9852 | | 5.0 | 785 | - | 0.9852 | | 5.0955 | 800 | - | 0.9852 | | 5.4140 | 850 | - | 0.9852 | | 5.7325 | 900 | - | 0.9826 | | 6.0 | 942 | - | 0.9779 | | 6.0510 | 950 | - | 0.9779 | | 6.3694 | 1000 | 0.0878 | 0.9852 | | 6.6879 | 1050 | - | 0.9779 | | 7.0 | 1099 | - | 0.9852 | | 7.0064 | 1100 | - | 0.9852 | | 7.3248 | 1150 | - | 0.9852 | | 7.6433 | 1200 | - | 0.9852 | | 7.9618 | 1250 | - | 0.9852 | | 8.0 | 1256 | - | 0.9852 | | 8.2803 | 1300 | - | 0.9852 | | 8.5987 | 1350 | - | 0.9826 | | 8.9172 | 1400 | - | 0.9852 | | 9.0 | 1413 | - | 0.9852 | | 9.2357 | 1450 | - | 0.9826 | | 9.5541 | 1500 | 0.0422 | 0.9826 | | 9.8726 | 1550 | - | 0.9752 | | 10.0 | 1570 | - | 0.9752 | ### Framework Versions - Python: 3.11.12 - Sentence Transformers: 4.1.0 - Transformers: 4.51.3 - PyTorch: 2.6.0+cu124 - Accelerate: 1.6.0 - Datasets: 2.14.4 - Tokenizers: 0.21.1 ## Citation ### BibTeX #### Sentence Transformers ```bibtex @inproceedings{reimers-2019-sentence-bert, title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", author = "Reimers, Nils and Gurevych, Iryna", booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing", month = "11", year = "2019", publisher = "Association for Computational Linguistics", url = "https://arxiv.org/abs/1908.10084", } ``` #### MatryoshkaLoss ```bibtex @misc{kusupati2024matryoshka, title={Matryoshka Representation Learning}, author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi}, year={2024}, eprint={2205.13147}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` #### MultipleNegativesRankingLoss ```bibtex @misc{henderson2017efficient, title={Efficient Natural Language Response Suggestion for Smart Reply}, author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil}, year={2017}, eprint={1705.00652}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible 
across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
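Returning to the MatryoshkaLoss parameters listed under Training Details: a minimal sketch of how that loss configuration is constructed with the sentence-transformers API (training-loop wiring omitted):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Apply the ranking loss at each truncated embedding size, matching the
# matryoshka_dims (with uniform weights) shown in the JSON above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```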
mradermacher/InternVL3-8B-i1-GGUF
mradermacher
"2025-05-13T11:34:26"
0
0
transformers
[ "transformers", "gguf", "internvl", "custom_code", "multilingual", "dataset:OpenGVLab/MMPR-v1.2", "base_model:OpenGVLab/InternVL3-8B", "base_model:quantized:OpenGVLab/InternVL3-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-13T10:47:47"
--- base_model: OpenGVLab/InternVL3-8B datasets: - OpenGVLab/MMPR-v1.2 language: - multilingual library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE license_name: qwen quantized_by: mradermacher tags: - internvl - custom_code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/OpenGVLab/InternVL3-8B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/InternVL3-8B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q4_0.gguf) | 
i1-Q4_0 | 4.5 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/InternVL3-8B-i1-GGUF/resolve/main/InternVL3-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
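For reference, a minimal sketch of fetching one of the quants listed above with the huggingface_hub Python API (the Q4_K_M file name is taken from the table):

```python
from huggingface_hub import hf_hub_download

# Download the "fast, recommended" Q4_K_M quant to the local HF cache.
path = hf_hub_download(
    repo_id="mradermacher/InternVL3-8B-i1-GGUF",
    filename="InternVL3-8B.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to a GGUF runtime such as llama.cpp
```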
DeathReaper0965/Qwen2.5-3B-Inst-SQL-Reasoning-GRPO
DeathReaper0965
"2025-05-13T11:34:14"
10
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "grpo", "conversational", "en", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-28T10:09:43"
--- base_model: Qwen/Qwen2.5-3B-Instruct library_name: transformers model_name: Qwen2.5-3B-Inst-SQL-Reasoning-GRPO tags: - trl - grpo licence: license license: apache-2.0 language: - en --- # Qwen-2.5-3B-Instruct Based Text-to-SQL Generation Model Aligned with Multiple Reward Functions via GRPO This model is RL-tuned using GRPO to produce Reasoning based SQL Queries as an output. You can use the same `system` prompt or modify as needed. Just by entering the `SCHEMAS` and `QUESTION` in the format below as part of the `user` prompt, you'll be able to generate the required SQL Query that answers the `question` along with the model's reasoning traces. ## Quick start ```python import torch from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TextStreamer model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B-Instruct", max_length=2560) model = PeftModel.from_pretrained(model, "DeathReaper0965/Qwen2.5-3B-Inst-SQL-Reasoning-GRPO", is_trainable=False) tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct", max_length = 2560) def create_prompt(schemas, question): prompt = [ { 'role': 'system', 'content': """\ You are an expert SQL Query Writer. Given relevant Schemas and the Question, you first understand the problem entirely and then reason about the best possible approach to come up with an answer. Once, you are confident in your reasoning, you will then start generating the SQL Query as the answer that accurately solves the given question leveraging some or all schemas. Remember that you should place all your reasoning between <reason> and </reason> tags. Also, you should provide your solution between <answer> and </answer> tags. An example generation is as follows: <reason> This is a sample reasoning that solves the question based on the schema. </reason> <answer> SELECT COLUMN FROM TABLE_NAME WHERE CONDITION </answer>""" }, { 'role': 'user', 'content': f"""\ SCHEMAS: --------------- {schemas} --------------- QUESTION: "{question}"\ """ } ] return prompt schemas = """\ CREATE TABLE lab ( subject_id text, hadm_id text, itemid int, charttime date, flag bool, value_unit int, label text, fluid text ) CREATE TABLE diagnoses ( subject_id text, hadm_id text, icd9_code text, short_title text, long_title text ) CREATE TABLE procedures ( subject_id text, hadm_id text, icd9_code text, short_title text, long_title text ) CREATE TABLE demographic ( subject_id text, hadm_id text, name text, marital_status text, age int, dob date, gender text, language text, religion text, admission_type text, days_stay text, insurance text, ethnicity text, expire_flag bool, admission_location text, discharge_location text, diagnosis text, dod date, dob_year date, dod_year date, admittime date, dischtime date, admityear int ) CREATE TABLE prescriptions ( subject_id text, hadm_id text, icustay_id text, drug_type text, drug text, formulary_drug_cd text, route text, drug_dose text )\ """ question = "How many patients whose admission type is emergency and diagnoses icd9 code is 56210?" 
example_prompt = create_prompt(schemas, question) streamer = TextStreamer(tokenizer, skip_prompt=True) inputs = tokenizer.apply_chat_template(example_prompt, tokenize=True, add_generation_prompt=True, return_dict=True, return_tensors="pt") with torch.inference_mode(): outputs = model.generate(**inputs, max_new_tokens=1024, streamer=streamer) outputs = tokenizer.batch_decode(outputs) print(outputs[0].split("<|im_start|>assistant")[-1]) ###########OUTPUT########### <reason> To answer this question, we need to perform the following steps: 1. Identify patients who have an 'emergency' admission type from the `demographic` table. 2. Identify patients who have the ICD-9 code '56210' in their `diagnosis` field from the same `demographic` table. 3. Find the intersection of these two groups by joining the results of the above queries. 4. Count the number of unique patients who meet both criteria. We can achieve this using a combination of JOIN operations in our SQL query. </reason> <answer> SELECT COUNT(DISTINCT d.subject_id) FROM demographic AS d JOIN diagnoses AS di ON d.subject_id = di.subject_id AND d.hadm_id = di.hadm_id WHERE d.admission_type = 'Emergency' AND di.icd9_code = '56210' </answer> ``` > Designed and Developed with <span style="color: #e25555;">&hearts;</span> by [Praneet](https://deathreaper0965.github.io/) | [LinkedIn](http://linkedin.com/in/deathreaper0965) | [GitHub](https://github.com/DeathReaper0965/)
raduv98/MNLP_M2_document_encoder
raduv98
"2025-05-13T11:27:08"
0
0
sentence-transformers
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "en", "dataset:s2orc", "dataset:flax-sentence-embeddings/stackexchange_xml", "dataset:ms_marco", "dataset:gooaq", "dataset:yahoo_answers_topics", "dataset:code_search_net", "dataset:search_qa", "dataset:eli5", "dataset:snli", "dataset:multi_nli", "dataset:wikihow", "dataset:natural_questions", "dataset:trivia_qa", "dataset:embedding-data/sentence-compression", "dataset:embedding-data/flickr30k-captions", "dataset:embedding-data/altlex", "dataset:embedding-data/simple-wiki", "dataset:embedding-data/QQP", "dataset:embedding-data/SPECTER", "dataset:embedding-data/PAQ_pairs", "dataset:embedding-data/WikiAnswers", "arxiv:1904.06472", "arxiv:2102.07033", "arxiv:2104.08727", "arxiv:1704.05179", "arxiv:1810.09305", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
sentence-similarity
"2025-05-12T12:56:06"
--- language: en license: apache-2.0 library_name: sentence-transformers tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers datasets: - s2orc - flax-sentence-embeddings/stackexchange_xml - ms_marco - gooaq - yahoo_answers_topics - code_search_net - search_qa - eli5 - snli - multi_nli - wikihow - natural_questions - trivia_qa - embedding-data/sentence-compression - embedding-data/flickr30k-captions - embedding-data/altlex - embedding-data/simple-wiki - embedding-data/QQP - embedding-data/SPECTER - embedding-data/PAQ_pairs - embedding-data/WikiAnswers pipeline_tag: sentence-similarity --- # all-MiniLM-L6-v2 This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model; then apply the right pooling operation on top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch import torch.nn.functional as F # Mean Pooling - take the attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] # First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) # Normalize embeddings sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1) print("Sentence embeddings:") print(sentence_embeddings) ``` ------ ## Background The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from a pair, the model should predict which of a set of randomly sampled other sentences was actually paired with it in our dataset.
We developed this model during the [Community week using JAX/Flax for NLP & CV](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organized by Hugging Face, as part of the project [Train the Best Sentence Embedding Model Ever with 1B Training Pairs](https://discuss.huggingface.co/t/train-the-best-sentence-embedding-model-ever-with-1b-training-pairs/7354). We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8s), as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks. ## Intended uses Our model is intended to be used as a sentence and short paragraph encoder. Given an input text, it outputs a vector which captures the semantic information. The sentence vector may be used for information retrieval, clustering or sentence similarity tasks. By default, input text longer than 256 word pieces is truncated. ## Training procedure ### Pre-training We use the pretrained [`nreimers/MiniLM-L6-H384-uncased`](https://huggingface.co/nreimers/MiniLM-L6-H384-uncased) model. Please refer to the model card for more detailed information about the pre-training procedure. ### Fine-tuning We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between every possible sentence pair in the batch and then apply a cross-entropy loss against the true pairs (a minimal sketch of this in-batch objective follows the dataset table below). #### Hyperparameters We trained our model on a TPU v3-8 for 100k steps with a batch size of 1024 (128 per TPU core), a learning-rate warm-up of 500 steps, and a sequence length limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this repository: `train_script.py`. #### Training data We used a concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability; the configuration is detailed in the `data_config.json` file.
| Dataset | Paper | Number of training tuples | |--------------------------------------------------------|:----------------------------------------:|:--------------------------:| | [Reddit comments (2015-2018)](https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit) | [paper](https://arxiv.org/abs/1904.06472) | 726,484,430 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Abstracts) | [paper](https://aclanthology.org/2020.acl-main.447/) | 116,288,806 | | [WikiAnswers](https://github.com/afader/oqa#wikianswers-corpus) Duplicate question pairs | [paper](https://doi.org/10.1145/2623330.2623677) | 77,427,422 | | [PAQ](https://github.com/facebookresearch/PAQ) (Question, Answer) pairs | [paper](https://arxiv.org/abs/2102.07033) | 64,371,441 | | [S2ORC](https://github.com/allenai/s2orc) Citation pairs (Titles) | [paper](https://aclanthology.org/2020.acl-main.447/) | 52,603,982 | | [S2ORC](https://github.com/allenai/s2orc) (Title, Abstract) | [paper](https://aclanthology.org/2020.acl-main.447/) | 41,769,185 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Body) pairs | - | 25,316,456 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title+Body, Answer) pairs | - | 21,396,559 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) (Title, Answer) pairs | - | 21,396,559 | | [MS MARCO](https://microsoft.github.io/msmarco/) triplets | [paper](https://doi.org/10.1145/3404835.3462804) | 9,144,553 | | [GOOAQ: Open Question Answering with Diverse Answer Types](https://github.com/allenai/gooaq) | [paper](https://arxiv.org/pdf/2104.08727.pdf) | 3,012,496 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 1,198,260 | | [Code Search](https://huggingface.co/datasets/code_search_net) | - | 1,151,414 | | [COCO](https://cocodataset.org/#home) Image captions | [paper](https://link.springer.com/chapter/10.1007%2F978-3-319-10602-1_48) | 828,395| | [SPECTER](https://github.com/allenai/specter) citation triplets | [paper](https://doi.org/10.18653/v1/2020.acl-main.207) | 684,100 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Question, Answer) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 681,164 | | [Yahoo Answers](https://www.kaggle.com/soumikrakshit/yahoo-answers-dataset) (Title, Question) | [paper](https://proceedings.neurips.cc/paper/2015/hash/250cf8b51c773f3f8dc8b4be867a9a02-Abstract.html) | 659,896 | | [SearchQA](https://huggingface.co/datasets/search_qa) | [paper](https://arxiv.org/abs/1704.05179) | 582,261 | | [Eli5](https://huggingface.co/datasets/eli5) | [paper](https://doi.org/10.18653/v1/p19-1346) | 325,475 | | [Flickr 30k](https://shannon.cs.illinois.edu/DenotationGraph/) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/229/33) | 317,695 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles) | | 304,525 | | AllNLI ([SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) | [paper SNLI](https://doi.org/10.18653/v1/d15-1075), [paper MultiNLI](https://doi.org/10.18653/v1/n18-1101) | 277,230 | | [Stack 
Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (bodies) | | 250,519 | | [Stack Exchange](https://huggingface.co/datasets/flax-sentence-embeddings/stackexchange_xml) Duplicate questions (titles+bodies) | | 250,460 | | [Sentence Compression](https://github.com/google-research-datasets/sentence-compression) | [paper](https://www.aclweb.org/anthology/D13-1155/) | 180,000 | | [Wikihow](https://github.com/pvl/wikihow_pairs_dataset) | [paper](https://arxiv.org/abs/1810.09305) | 128,542 | | [Altlex](https://github.com/chridey/altlex/) | [paper](https://aclanthology.org/P16-1135.pdf) | 112,696 | | [Quora Question Triplets](https://quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs) | - | 103,663 | | [Simple Wikipedia](https://cs.pomona.edu/~dkauchak/simplification/) | [paper](https://www.aclweb.org/anthology/P11-2117/) | 102,225 | | [Natural Questions (NQ)](https://ai.google.com/research/NaturalQuestions) | [paper](https://transacl.org/ojs/index.php/tacl/article/view/1455) | 100,231 | | [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/) | [paper](https://aclanthology.org/P18-2124.pdf) | 87,599 | | [TriviaQA](https://huggingface.co/datasets/trivia_qa) | - | 73,346 | | **Total** | | **1,170,060,424** |
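Returning to the fine-tuning objective described above: a minimal sketch of the in-batch contrastive loss (cross-entropy over scaled cosine similarities). The scale value here is an assumption; the exact training setup is in `train_script.py`:

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, scale: float = 20.0):
    """emb_a, emb_b: (batch, dim) embeddings of the two sides of each sentence pair."""
    emb_a = F.normalize(emb_a, p=2, dim=1)
    emb_b = F.normalize(emb_b, p=2, dim=1)
    scores = emb_a @ emb_b.T * scale  # (batch, batch) cosine similarity matrix
    labels = torch.arange(scores.size(0), device=scores.device)  # true pair on the diagonal
    return F.cross_entropy(scores, labels)
```

Each sentence's positive partner sits on the diagonal of the score matrix, so every other sentence in the batch serves as an in-batch negative.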
daysunlab/llama-3-q8-daysunlab-part3-ch01-07
daysunlab
"2025-05-13T11:25:44"
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-05-13T11:25:39"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mradermacher/InternVL2_5-38B-GGUF
mradermacher
"2025-05-13T11:23:11"
0
0
transformers
[ "transformers", "gguf", "internvl", "custom_code", "multilingual", "dataset:HuggingFaceFV/finevideo", "base_model:OpenGVLab/InternVL2_5-38B", "base_model:quantized:OpenGVLab/InternVL2_5-38B", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-13T10:46:23"
--- base_model: OpenGVLab/InternVL2_5-38B datasets: - HuggingFaceFV/finevideo language: - multilingual library_name: transformers license: mit quantized_by: mradermacher tags: - internvl - custom_code --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/OpenGVLab/InternVL2_5-38B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q2_K.gguf) | Q2_K | 12.4 | | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q3_K_S.gguf) | Q3_K_S | 14.5 | | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q3_K_L.gguf) | Q3_K_L | 17.3 | | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q6_K.gguf) | Q6_K | 27.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/InternVL2_5-38B-GGUF/resolve/main/InternVL2_5-38B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Adamyuyuyu/Yuyu
Adamyuyuyu
"2025-05-13T11:12:37"
0
0
null
[ "license:creativeml-openrail-m", "region:us" ]
null
"2025-05-13T11:12:37"
--- license: creativeml-openrail-m ---
Anwaarma/L8-finetune
Anwaarma
"2025-05-13T11:01:59"
17
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:adapter:meta-llama/Llama-3.2-1B", "license:llama3.2", "region:us" ]
null
"2025-04-26T12:48:26"
--- library_name: peft license: llama3.2 base_model: meta-llama/Llama-3.2-1B tags: - generated_from_trainer metrics: - f1 model-index: - name: L8-finetune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # L8-finetune This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0448 - F1: 0.8379 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.79e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.6205 | 1.0 | 3032 | 0.4559 | 0.8252 | | 0.1557 | 2.0 | 6064 | 0.5944 | 0.8469 | | 0.6488 | 3.0 | 9096 | 0.8875 | 0.8291 | | 0.3076 | 4.0 | 12128 | 0.9611 | 0.8366 | | 0.0548 | 5.0 | 15160 | 1.0448 | 0.8379 | ### Framework versions - PEFT 0.15.2 - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 2.14.4 - Tokenizers 0.21.1
Denn231/internal_clf_v_0.53
Denn231
"2025-05-13T11:01:05"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-05-13T09:43:39"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoT69420/Phi4CroatianFinetuned
MoT69420
"2025-05-13T10:59:15"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/phi-4-unsloth-bnb-4bit", "base_model:finetune:unsloth/phi-4-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-12T18:06:48"
--- base_model: unsloth/phi-4-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** MoT69420 - **License:** apache-2.0 - **Finetuned from model :** unsloth/phi-4-unsloth-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
AKASH2393/mistral-finetuned
AKASH2393
"2025-05-13T10:57:14"
0
0
null
[ "pytorch", "tensorboard", "safetensors", "gguf", "mistral", "pretrained", "text-generation", "en", "arxiv:2310.06825", "license:apache-2.0", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-13T10:34:05"
--- language: - en license: apache-2.0 tags: - pretrained pipeline_tag: text-generation inference: parameters: temperature: 0.7 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- # Model Card for Mistral-7B-v0.1 The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested. For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/). ## Model Architecture Mistral-7B-v0.1 is a transformer model, with the following architecture choices: - Grouped-Query Attention - Sliding-Window Attention - Byte-fallback BPE tokenizer ## Troubleshooting - If you see the following error: ``` KeyError: 'mistral' ``` - Or: ``` NotImplementedError: Cannot copy out of meta tensor; no data! ``` Ensure you are utilizing a stable version of Transformers, 4.34.0 or newer. ## Notice Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms. ## The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
qwer2991/andrzej3
qwer2991
"2025-05-13T10:51:59"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-13T10:03:18"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Andrzej3 --- # Andrzej3 <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Andrzej3` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Andrzej3", "lora_weights": "https://huggingface.co/qwer2991/andrzej3/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('qwer2991/andrzej3', weight_name='lora.safetensors') image = pipeline('Andrzej3').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 4000 - Learning rate: 0.0004 - LoRA rank: 32 ## Contribute your own examples You can use the [community tab](https://huggingface.co/qwer2991/andrzej3/discussions) to add images that show off what you’ve made with this LoRA.
Cube-ai000/Cube-o1
Cube-ai000
"2025-05-13T10:41:59"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-12T15:03:40"
--- license: apache-2.0 ---
Macropodus/macbert4mdcspell_v2
Macropodus
"2025-05-13T10:40:29"
0
1
null
[ "pytorch", "bert", "csc", "macro-correct", "pycorrector", "mdcspell", "macbert4mdcspell", "chinese-spelling-correct", "zh", "license:apache-2.0", "region:us" ]
null
"2025-05-13T10:29:53"
--- license: apache-2.0 language: - zh tags: - csc - macro-correct - pycorrector - mdcspell - macbert4mdcspell - chinese-spelling-correct --- <p align="center"> <img src="tet/images/csc_logo.png" width="480"> </p> # [macro-correct](https://github.com/yongzhuo/macro-correct) [![PyPI](https://img.shields.io/pypi/v/macro-correct)](https://pypi.org/project/macro-correct/) [![Build Status](https://travis-ci.com/yongzhuo/macro-correct.svg?branch=master)](https://travis-ci.com/yongzhuo/macro-correct) [![PyPI_downloads](https://img.shields.io/pypi/dm/macro-correct)](https://pypi.org/project/macro-correct/) [![Stars](https://img.shields.io/github/stars/yongzhuo/macro-correct?style=social)](https://github.com/yongzhuo/macro-correct/stargazers) [![Forks](https://img.shields.io/github/forks/yongzhuo/macro-correct.svg?style=social)](https://github.com/yongzhuo/macro-correct/network/members) [![Join the chat at https://gitter.im/yongzhuo/macro-correct](https://badges.gitter.im/yongzhuo/macro-correct.svg)](https://gitter.im/yongzhuo/macro-correct?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) >>> macro-correct, 文本纠错工具包(Text Correct), 支持中文拼写纠错/标点符号纠错(CSC, Chinese Spelling Correct / Check), CSC支持各领域数据(包括古文), 模型在大规模、各领域的、现代/当代语料上训练而得, 泛化性强. >>> macro-correct是一个只依赖pytorch、transformers、numpy、opencc的文本纠错(CSC, 中文拼写纠错; Punct, 中文标点纠错)工具包,专注于中文文本纠错的极简自然语言处理工具包。 使用大部分市面上的开源数据集构建生成的混淆集,使用人民日报语料&学习强国语料等生成1000万+训练数据集来训练模型; 支持MDCSpell、Macbert、ReLM、SoftBERT、BertCRF等多种经典模型; 支持中文拼写纠错、中文标点符号纠错、中文语法纠错(待续)、独立的检测模型/识别模型(待续); 具有依赖轻量、代码简洁、注释详细、调试清晰、配置灵活、拓展方便、适配NLP等特性。 ## 目录 * [安装](#安装) * [调用](#调用) * [体验](#体验) * [词典](#词典) * [详情](#详情) * [训练](#训练) * [测评](#测评) * [日志](#日志) * [参考](#参考) * [论文](#论文) * [Cite](#Cite) # 安装 ```bash pip install macro-correct # 清华镜像源 pip install -i https://pypi.tuna.tsinghua.edu.cn/simple macro-correct # 如果不行, 则不带依赖安装, 之后缺什么包再补充什么 pip install -i https://pypi.tuna.tsinghua.edu.cn/simple macro-correct --no-dependencies ``` # 调用 更多样例sample详情见/tet目录 - 使用example详见/tet/tet目录, 中文拼写纠错代码为tet_csc_token_zh.py, 中文标点符号纠错代码为tet_csc_punct_zh.py, CSC也可以直接用tet_csc_flag_transformers.py - 训练代码详见/tet/train目录, 可配置本地预训练模型地址和各种参数等; # 体验 [HF---Space---Macropodus/macbert4csc_v2](https://huggingface.co/spaces/Macropodus/macbert4csc_v2) <img src="tet/images/csc_demo.png" width="1024"> ## 2.调用-文本纠错 ### 2.1 CSC 使用 macro-bert ```python # !/usr/bin/python # -*- coding: utf-8 -*- # @time : 2021/2/29 21:41 # @author : Mo # @function: 文本纠错, 使用macro-correct import os os.environ["MACRO_CORRECT_FLAG_CSC_TOKEN"] = "1" from macro_correct import correct ### 默认纠错(list输入) text_list = ["真麻烦你了。希望你们好好的跳无", "少先队员因该为老人让坐", "机七学习是人工智能领遇最能体现智能的一个分知", "一只小鱼船浮在平净的河面上" ] text_csc = correct(text_list) print("默认纠错(list输入):") for res_i in text_csc: print(res_i) print("#" * 128) """ 默认纠错(list输入): {'index': 0, 'source': '真麻烦你了。希望你们好好的跳无', 'target': '真麻烦你了。希望你们好好地跳舞', 'errors': [['的', '地', 12, 0.6584], ['无', '舞', 14, 1.0]]} {'index': 1, 'source': '少先队员因该为老人让坐', 'target': '少先队员应该为老人让坐', 'errors': [['因', '应', 4, 0.995]]} {'index': 2, 'source': '机七学习是人工智能领遇最能体现智能的一个分知', 'target': '机器学习是人工智能领域最能体现智能的一个分支', 'errors': [['七', '器', 1, 0.9998], ['遇', '域', 10, 0.9999], ['知', '支', 21, 1.0]]} {'index': 3, 'source': '一只小鱼船浮在平净的河面上', 'target': '一只小鱼船浮在平静的河面上', 'errors': [['净', '静', 8, 0.9961]]} """ ``` ### 2.2 CSC 使用 transformers ```bash # !/usr/bin/python # -*- coding: utf-8 -*- # @time : 2021/2/29 21:41 # @author : Mo # @function: transformers直接加载bert类模型测试 import traceback import time import sys import os os.environ["USE_TORCH"] = "1" from 
transformers import BertConfig, BertTokenizer, BertForMaskedLM import torch # pretrained_model_name_or_path = "shibing624/macbert4csc-base-chinese" pretrained_model_name_or_path = "Macropodus/macbert4mdcspell_v2" # pretrained_model_name_or_path = "Macropodus/macbert4mdcspell_v1" # pretrained_model_name_or_path = "Macropodus/macbert4csc_v1" # pretrained_model_name_or_path = "Macropodus/macbert4csc_v2" # pretrained_model_name_or_path = "Macropodus/bert4csc_v1" device = torch.device("cuda" if torch.cuda.is_available() else "cpu") max_len = 128 print("load model, please wait a few minute!") tokenizer = BertTokenizer.from_pretrained(pretrained_model_name_or_path) bert_config = BertConfig.from_pretrained(pretrained_model_name_or_path) model = BertForMaskedLM.from_pretrained(pretrained_model_name_or_path) model.to(device) print("load model success!") texts = [ "机七学习是人工智能领遇最能体现智能的一个分知", "我是练习时长两念半的鸽仁练习生蔡徐坤", "真麻烦你了。希望你们好好的跳无", "他法语说的很好,的语也不错", "遇到一位很棒的奴生跟我疗天", "我们为这个目标努力不解", ] len_mid = min(max_len, max([len(t)+2 for t in texts])) with torch.no_grad(): outputs = model(**tokenizer(texts, padding=True, max_length=len_mid, return_tensors="pt").to(device)) def get_errors(source, target): """ 极简方法获取 errors """ len_min = min(len(source), len(target)) errors = [] for idx in range(len_min): if source[idx] != target[idx]: errors.append([source[idx], target[idx], idx]) return errors result = [] for probs, source in zip(outputs.logits, texts): ids = torch.argmax(probs, dim=-1) tokens_space = tokenizer.decode(ids[1:-1], skip_special_tokens=False) text_new = tokens_space.replace(" ", "") target = text_new[:len(source)] errors = get_errors(source, target) print(source, " => ", target, errors) result.append([target, errors]) print(result) """ 机七学习是人工智能领遇最能体现智能的一个分知 => 机器学习是人工智能领域最能体现智能的一个分支 [['七', '器', 1], ['遇', '域', 10], ['知', '支', 21]] 我是练习时长两念半的鸽仁练习生蔡徐坤 => 我是练习时长两年半的个人练习生蔡徐坤 [['念', '年', 7], ['鸽', '个', 10], ['仁', '人', 11]] 真麻烦你了。希望你们好好的跳无 => 真麻烦你了。希望你们好好地跳舞 [['的', '地', 12], ['无', '舞', 14]] 他法语说的很好,的语也不错 => 他法语说得很好,德语也不错 [['的', '得', 4], ['的', '德', 8]] 遇到一位很棒的奴生跟我疗天 => 遇到一位很棒的女生跟我聊天 [['奴', '女', 7], ['疗', '聊', 11]] 我们为这个目标努力不解 => 我们为这个目标努力不懈 [['解', '懈', 10]] """ ``` ## 3.调用-标点纠错 ```python import os os.environ["MACRO_CORRECT_FLAG_CSC_PUNCT"] = "1" from macro_correct import correct_punct ### 1.默认标点纠错(list输入) text_list = ["山不在高有仙则名。", "水不在深,有龙则灵", "斯是陋室惟吾德馨", "苔痕上阶绿草,色入帘青。" ] text_csc = correct_punct(text_list) print("默认标点纠错(list输入):") for res_i in text_csc: print(res_i) print("#" * 128) """ 默认标点纠错(list输入): {'index': 0, 'source': '山不在高有仙则名。', 'target': '山不在高,有仙则名。', 'score': 0.9917, 'errors': [['', ',', 4, 0.9917]]} {'index': 1, 'source': '水不在深,有龙则灵', 'target': '水不在深,有龙则灵。', 'score': 0.9995, 'errors': [['', '。', 9, 0.9995]]} {'index': 2, 'source': '斯是陋室惟吾德馨', 'target': '斯是陋室,惟吾德馨。', 'score': 0.9999, 'errors': [['', ',', 4, 0.9999], ['', '。', 8, 0.9998]]} {'index': 3, 'source': '苔痕上阶绿草,色入帘青。', 'target': '苔痕上阶绿,草色入帘青。', 'score': 0.9998, 'errors': [['', ',', 5, 0.9998]]} """ ``` # 词典 ## 默认混淆词典地址 * macro_correct/output/confusion_dict.json ## 操作混淆词典 ```python ## 自定义混淆词典 # !/usr/bin/python # -*- coding: utf-8 -*- # @time : 2021/2/29 21:41 # @author : Mo # @function: tet csc of token confusion dict, 混淆词典 import os os.environ["MACRO_CORRECT_FLAG_CSC_TOKEN"] = "1" from macro_correct.pytorch_textcorrection.tcTrie import ConfusionCorrect from macro_correct import MODEL_CSC_TOKEN from macro_correct import correct ### 默认使用混淆词典 user_dict = { "乐而往返": "乐而忘返", "金钢钻": "金刚钻", "藤罗蔓": "藤萝蔓", } text_list = [ "为什么乐而往返?", "没有金钢钻就不揽瓷活!", 
"你喜欢藤罗蔓吗?", "三周年祭日在哪举行?" ] text_csc = correct(text_list, flag_confusion=False) print("默认纠错(不带混淆词典):") for res_i in text_csc: print(res_i) print("#" * 128) text_csc = correct(text_list, flag_confusion=True) print("默认纠错(-带混淆词典-默认):") for res_i in text_csc: print(res_i) print("#" * 128) # ---混淆词典--- ### 只新增, 新增用户词典(默认混淆词典也使用) MODEL_CSC_TOKEN.model_csc.model_confusion = ConfusionCorrect(user_dict=user_dict) text_csc = correct(text_list, flag_confusion=True) print("默认纠错(-带混淆词典-新增):") for res_i in text_csc: print(res_i) print("#" * 128) ### 全覆盖, 只使用用户词典(默认混淆词典废弃) MODEL_CSC_TOKEN.model_csc.model_confusion = ConfusionCorrect(confusion_dict=user_dict) text_csc = correct(text_list, flag_confusion=True) print("默认纠错(-带混淆词典-全覆盖):") for res_i in text_csc: print(res_i) print("#" * 128) # ---混淆词典文件--- ### 只新增, 新增用户词典(默认混淆词典也使用), path不为空即可; json文件, {混淆词语:正确词语} key-value; 详见macro-correct/tet/tet/tet_csc_token_confusion.py path_user = "./user_confusion_dict.json" MODEL_CSC_TOKEN.model_csc.model_confusion = ConfusionCorrect(path="1", path_user=path_user) text_csc = correct(text_list, flag_confusion=True) print("默认纠错(-带混淆词典文件-新增):") for res_i in text_csc: print(res_i) print("#" * 128) ### 全覆盖, 只使用用户词典(默认混淆词典废弃); path必须传空字符串 MODEL_CSC_TOKEN.model_csc.model_confusion = ConfusionCorrect(path="", path_user=path_user) text_csc = correct(text_list, flag_confusion=True) print("默认纠错(-带混淆词典文件-全覆盖):") for res_i in text_csc: print(res_i) print("#" * 128) """ 默认纠错(不带混淆词典): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而往返?', 'errors': []} {'index': 1, 'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 0.6587]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 0.8582]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年祭日在哪举行?', 'errors': []} ################################################################################################################################ 默认纠错(-带混淆词典-默认): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而往返?', 'errors': []} {'index': 1, 'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 1.0]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 0.8582]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年忌日在哪举行?', 'errors': [['祭', '忌', 3, 1.0]]} ################################################################################################################################ 默认纠错(-带混淆词典-新增): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而忘返?', 'errors': [['往', '忘', 5, 1.0]]} {'index': 1, 'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 1.0]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 1.0]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年忌日在哪举行?', 'errors': [['祭', '忌', 3, 1.0]]} ################################################################################################################################ 默认纠错(-带混淆词典-全覆盖): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而忘返?', 'errors': [['往', '忘', 5, 1.0]]} {'index': 1, 'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 1.0]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 1.0]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年祭日在哪举行?', 'errors': []} ################################################################################################################################ 默认纠错(-带混淆词典文件-新增): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而忘返?', 'errors': [['往', '忘', 5, 1.0]]} {'index': 1, 
'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 1.0]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 1.0]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年忌日在哪举行?', 'errors': [['祭', '忌', 3, 1.0]]} ################################################################################################################################ 默认纠错(-带混淆词典文件-全覆盖): {'index': 0, 'source': '为什么乐而往返?', 'target': '为什么乐而忘返?', 'errors': [['往', '忘', 5, 1.0]]} {'index': 1, 'source': '没有金钢钻就不揽瓷活!', 'target': '没有金刚钻就不揽瓷活!', 'errors': [['钢', '刚', 3, 1.0]]} {'index': 2, 'source': '你喜欢藤罗蔓吗?', 'target': '你喜欢藤萝蔓吗?', 'errors': [['罗', '萝', 4, 1.0]]} {'index': 3, 'source': '三周年祭日在哪举行?', 'target': '三周年祭日在哪举行?', 'errors': []} ################################################################################################################################ """ ``` # 详情 ## CSC调用(超参数说明) ```python import os os.environ["MACRO_CORRECT_FLAG_CSC_TOKEN"] = "1" from macro_correct import correct ### 默认纠错(list输入) text_list = ["真麻烦你了。希望你们好好的跳无", "少先队员因该为老人让坐", "机七学习是人工智能领遇最能体现智能的一个分知", "一只小鱼船浮在平净的河面上" ] ### 默认纠错(list输入, 参数配置) params = { "threshold": 0.55, # token阈值过滤 "batch_size": 32, # 批大小 "max_len": 128, # 自定义的长度, 如果截断了, 则截断部分不参与纠错, 后续直接一模一样的补回来 "rounded": 4, # 保存4位小数 "flag_confusion": True, # 是否使用默认的混淆词典 "flag_prob": True, # 是否返回纠错token处的概率 } text_csc = correct(text_list, **params) print("默认纠错(list输入, 参数配置):") for res_i in text_csc: print(res_i) print("#" * 128) """ 默认纠错(list输入): {'index': 0, 'source': '真麻烦你了。希望你们好好的跳无', 'target': '真麻烦你了。希望你们好好地跳舞', 'errors': [['的', '地', 12, 0.6584], ['无', '舞', 14, 1.0]]} {'index': 1, 'source': '少先队员因该为老人让坐', 'target': '少先队员应该为老人让坐', 'errors': [['因', '应', 4, 0.995]]} {'index': 2, 'source': '机七学习是人工智能领遇最能体现智能的一个分知', 'target': '机器学习是人工智能领域最能体现智能的一个分支', 'errors': [['七', '器', 1, 0.9998], ['遇', '域', 10, 0.9999], ['知', '支', 21, 1.0]]} {'index': 3, 'source': '一只小鱼船浮在平净的河面上', 'target': '一只小鱼船浮在平静的河面上', 'errors': [['净', '静', 8, 0.9961]]} """ ``` ## PUNCT调用(超参数说明) ```python import os os.environ["MACRO_CORRECT_FLAG_CSC_PUNCT"] = "1" from macro_correct import correct_punct ### 1.默认标点纠错(list输入) text_list = ["山不在高有仙则名。", "水不在深,有龙则灵", "斯是陋室惟吾德馨", "苔痕上阶绿草,色入帘青。" ] ### 2.默认标点纠错(list输入, 参数配置详情) params = { "limit_num_errors": 4, # 一句话最多的错别字, 多的就剔除 "limit_len_char": 4, # 一句话的最小字符数 "threshold_zh": 0.5, # 句子阈值, 中文字符占比的最低值 "threshold": 0.55, # token阈值过滤 "batch_size": 32, # 批大小 "max_len": 128, # 自定义的长度, 如果截断了, 则截断部分不参与纠错, 后续直接一模一样的补回来 "rounded": 4, # 保存4位小数 "flag_prob": True, # 是否返回纠错token处的概率 } text_csc = correct_punct(text_list, **params) print("默认标点纠错(list输入):") for res_i in text_csc: print(res_i) print("#" * 128) """ 默认标点纠错(list输入): {'index': 0, 'source': '山不在高有仙则名。', 'target': '山不在高,有仙则名。', 'score': 0.9917, 'errors': [['', ',', 4, 0.9917]]} {'index': 1, 'source': '水不在深,有龙则灵', 'target': '水不在深,有龙则灵。', 'score': 0.9995, 'errors': [['', '。', 9, 0.9995]]} {'index': 2, 'source': '斯是陋室惟吾德馨', 'target': '斯是陋室,惟吾德馨。', 'score': 0.9999, 'errors': [['', ',', 4, 0.9999], ['', '。', 8, 0.9998]]} {'index': 3, 'source': '苔痕上阶绿草,色入帘青。', 'target': '苔痕上阶绿,草色入帘青。', 'score': 0.9998, 'errors': [['', ',', 5, 0.9998]]} """ ``` # 训练 ## CSC任务 ### 目录地址 * macbert4mdcspell: macro_correct/pytorch_user_models/csc/macbert4mdcspell/train_yield.py * macbert4csc: macro_correct/pytorch_user_models/csc/macbert4csc/train_yield.py * relm: macro_correct/pytorch_user_models/csc/relm/train_yield.py ### 数据准备 * espell: list<dict>的json文件结构, 带"original_text"和"correct_text"就好, 
参考macro_correct/corpus/text_correction/espell ``` [ { "original_text": "遇到逆竟时,我们必须勇于面对,而且要愈挫愈勇,这样我们才能朝著成功之路前进。", "correct_text": "遇到逆境时,我们必须勇于面对,而且要愈挫愈勇,这样我们才能朝著成功之路前进。", } ] ``` * sighan: list<dict>的json文件结构, 带"source"和"target"就好, 参考macro_correct/corpus/text_correction/sighan ``` [ { "source": "若被告人正在劳动教养,则可以通过劳动教养单位转交", "target": "若被告人正在劳动教养,则可以通过劳动教养单位转交", } ] ``` ### 配置-训练-验证-预测 #### 配置 配置好数据地址和超参, 参考macro_correct/pytorch_user_models/csc/macbert4mdcspell/config.py #### 训练-验证-预测 ``` 训练 nohup python train_yield.py > tc.train_yield.py.log 2>&1 & tail -n 1000 -f tc.train_yield.py.log 验证 python eval_std.py 预测 python predict.py ``` ## PUNCT任务 ### 目录地址 * PUNCT: macro_correct/pytorch_sequencelabeling/slRun.py ### 数据准备 * SPAN格式: NER任务, 默认用span格式(jsonl), 参考macro_correct/corpus/sequence_labeling/chinese_symbol的chinese_symbol.dev.span文件 ``` {'label': [{'type': '0', 'ent': '下', 'pos': [7, 7]}, {'type': '1', 'ent': '林', 'pos': [14, 14]}], 'text': '#桂林山水甲天下阳朔山水甲桂林'} {'label': [{'type': '11', 'ent': 'o', 'pos': [5, 5]}, {'type': '0', 'ent': 't', 'pos': [12, 12]}, {'type': '1', 'ent': '包', 'pos': [19, 19]}], 'text': '#macrocorrect文本纠错工具包'} ``` * CONLL格式: 生成SPAN格式后, 用macro_correct/tet/corpus/pos_to_conll.py转换一下就好 ``` 神 O 秘 O 宝 O 藏 B-1 在 O 旅 O 途 O 中 B-0 他 O ``` ### 配置-训练-验证-预测 #### 配置 配置好数据地址和超参, 参考macro_correct/pytorch_user_models/csc/macbert4mdcspell/config.py #### 训练-验证-预测 ``` 训练 nohup python train_yield.py > tc.train_yield.py.log 2>&1 & tail -n 1000 -f tc.train_yield.py.log 验证 python eval_std.py 预测 python predict.py ``` # 测评 ## 说明 * 所有训练数据均来自公网或开源数据, 训练数据为1千万左右, 混淆词典较大; * 所有测试数据均来自公网或开源数据, 测评数据地址为[Macropodus/csc_eval_public](https://huggingface.co/datasets/Macropodus/csc_eval_public); * 测评代码主要为[tcEval.py](https://github.com/yongzhuo/macro-correct/macro_correct/pytorch_textcorrection/tcEval.py); 其中[qwen25_1-5b_pycorrector]()的测评代码在目录[eval](https://github.com/yongzhuo/macro-correct/tet/eval) * 评估标准:过纠率(过度纠错, 即高质量正确句子的错误纠正); 句子级宽松标准的准确率/精确率/召回率/F1(同[shibing624/pycorrector](https://github.com/shibing624/pycorrector)); 句子级严格标准的准确率/精确率/召回率/F1(同[wangwang110/CSC](https://github.com/wangwang110/CSC)); 字符级别的准确率/精确率/召回率/F1(错别字); * qwen25_1-5b_pycorrector权重地址在[shibing624/chinese-text-correction-1.5b](https://huggingface.co/shibing624/chinese-text-correction-1.5b) * macbert4csc_pycorrector权重地址在[shibing624/macbert4csc-base-chinese](https://huggingface.co/shibing624/macbert4csc-base-chinese); * macbert4mdcspell_v1权重地址在[Macropodus/macbert4mdcspell_v1](https://huggingface.co/Macropodus/macbert4mdcspell_v1); * macbert4mdcspell_v2权重地址在[Macropodus/macbert4mdcspell_v2](https://huggingface.co/Macropodus/macbert4mdcspell_v2); * macbert4csc_v2权重地址在[Macropodus/macbert4csc_v2](https://huggingface.co/Macropodus/macbert4csc_v2); * macbert4csc_v1权重地址在[Macropodus/macbert4csc_v1](https://huggingface.co/Macropodus/macbert4csc_v1); * bert4csc_v1权重地址在[Macropodus/bert4csc_v1](https://huggingface.co/Macropodus/bert4csc_v1); ## 3.1 测评数据 ``` 1.gen_de3.json(5545): '的地得'纠错, 由人民日报/学习强国/chinese-poetry等高质量数据人工生成; 2.lemon_v2.tet.json(1053): relm论文提出的数据, 多领域拼写纠错数据集(7个领域), ; 包括game(GAM), encyclopedia (ENC), contract (COT), medical care(MEC), car (CAR), novel (NOV), and news (NEW)等领域; 3.acc_rmrb.tet.json(4636): 来自NER-199801(人民日报高质量语料); 4.acc_xxqg.tet.json(5000): 来自学习强国网站的高质量语料; 5.gen_passage.tet.json(10000): 源数据为qwen生成的好词好句, 由几乎所有的开源数据汇总的混淆词典生成; 6.textproof.tet.json(1447): NLP竞赛数据, TextProofreadingCompetition; 7.gen_xxqg.tet.json(5000): 源数据为学习强国网站的高质量语料, 由几乎所有的开源数据汇总的混淆词典生成; 8.faspell.dev.json(1000): 视频字幕通过OCR后获取的数据集; 来自爱奇艺的论文faspell; 
9.lomo_tet.json(5000): 主要为音似中文拼写纠错数据集; 来自腾讯; 人工标注的数据集CSCD-NS; 10.mcsc_tet.5000.json(5000): 医学拼写纠错; 来自腾讯医典APP的真实历史日志; 注意论文说该数据集只关注医学实体的纠错, 常用字等的纠错并不关注; 11.ecspell.dev.json(1500): 来自ECSpell论文, 包括(law/med/gov)等三个领域; 12.sighan2013.dev.json(1000): 来自sighan13会议; 13.sighan2014.dev.json(1062): 来自sighan14会议; 14.sighan2015.dev.json(1100): 来自sighan15会议; ``` ## 3.2 测评再说明 ``` 1.数据预处理, 测评数据都经过 全角转半角,繁简转化,标点符号标准化等操作; 2.指标带common的极为宽松指标, 同开源项目pycorrector的评估指标; 3.指标带strict的极为严格指标, 同开源项目[wangwang110/CSC](https://github.com/wangwang110/CSC); 4.macbert4mdcspell_v1/v2模型为训练使用mdcspell架构+bert的mlm-loss, 但是推理的时候只用bert-mlm; 5.acc_rmrb/acc_xxqg数据集没有错误, 用于评估模型的误纠率(过度纠错); 6.qwen25_1-5b_pycorrector的模型为shibing624/chinese-text-correction-1.5b, 其训练数据包括了lemon_v2/mcsc_tet/ecspell的验证集和测试集, 其他的bert类模型的训练不包括验证集和测试集; ``` ## 3.3 测评结果 ### 3.3.1 F1(common_cor_f1) | model/common_cor_f1 | avg| gen_de3| lemon_v2| gen_passage| text_proof| gen_xxqg| faspell| lomo_tet| mcsc_tet| ecspell| sighan2013| sighan2014| sighan2015 | |:------------------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------| | macbert4csc_pycorrector | 45.8| 42.44| 42.89| 31.49| 46.31| 26.06| 32.7| 44.83| 27.93| 55.51| 70.89| 61.72| 66.81 | | qwen25_1-5b_pycorrector | 45.11| 27.29| 89.48| 14.61| 83.9| 13.84| 18.2| 36.71| 96.29| 88.2| 36.41| 15.64| 20.73 | | bert4csc_v1 | 62.28| 93.73| 61.99| 44.79| 68.0| 35.03| 48.28| 61.8| 64.41| 79.11| 77.66| 51.01| 61.54 | | macbert4csc_v1 | 68.55| 96.67| 65.63| 48.4| 75.65| 38.43| 51.76| 70.11| 80.63| 85.55| 81.38| 57.63| 70.7 | | macbert4csc_v2 | 68.6| 96.74| 66.02| 48.26| 75.78| 38.84| 51.91| 70.17| 80.71| 85.61| 80.97| 58.22| 69.95 | | macbert4mdcspell_v1 | 71.1| 96.42| 70.06| 52.55| 79.61| 43.37| 53.85| 70.9| 82.38| 87.46| 84.2| 61.08| 71.32 | | macbert4mdcspell_v2 | 71.23| 96.42| 65.8| 52.35| 75.94| 43.5| 53.82| 72.66| 82.28| 88.69| 82.51| 65.59| 75.26 | ### 3.3.2 acc(common_cor_acc) | model/common_cor_acc| avg| gen_de3| lemon_v2| gen_passage| text_proof| gen_xxqg| faspell| lomo_tet| mcsc_tet| ecspell| sighan2013| sighan2014| sighan2015 | |:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------|:-----------------| | macbert4csc_pycorrector| 48.26| 26.96| 28.68| 34.16| 55.29| 28.38| 22.2| 60.96| 57.16| 67.73| 55.9| 68.93| 72.73 | | qwen25_1-5b_pycorrector| 46.09| 15.82| 81.29| 22.96| 82.17| 19.04| 12.8| 50.2| 96.4| 89.13| 22.8| 27.87| 32.55 | | bert4csc_v1| 60.76| 88.21| 45.96| 43.13| 68.97| 35.0| 34.0| 65.86| 73.26| 81.8| 64.5| 61.11| 67.27 | | macbert4csc_v1| 65.34| 93.56| 49.76| 44.98| 74.64| 36.1| 37.0| 73.0| 83.6| 86.87| 69.2| 62.62| 72.73 | | macbert4csc_v2| 65.22| 93.69| 50.14| 44.92| 74.64| 36.26| 37.0| 72.72| 83.66| 86.93| 68.5| 62.43| 71.73 | | macbert4mdcspell_v1| 67.15| 93.09| 54.8| 47.71| 78.09| 39.52| 38.8| 71.92| 84.78| 88.27| 73.2| 63.28| 72.36 | | macbert4mdcspell_v2 | 68.31| 93.09| 50.05| 48.72| 75.74| 40.52| 38.9| 76.9| 84.8| 89.73| 71.0| 71.94| 78.36 | ### 3.3.3 acc(acc_true, thr=0.75) | model/acc | avg| acc_rmrb| acc_xxqg | |:------------------------|:-----------------|:-----------------|:-----------------| | macbert4csc_pycorrector | 99.24| 99.22| 99.26 | | qwen25_1-5b_pycorrector | 82.0| 77.14| 86.86 | | bert4csc_v1 | 
98.71| 98.36| 99.06 | | macbert4csc_v1 | 97.72| 96.72| 98.72 | | macbert4csc_v2 | 97.89| 96.98| 98.8 | | macbert4mdcspell_v1 | 97.75| 96.51| 98.98 | | macbert4mdcspell_v2 | 99.54| 99.22| 99.86 | ### 3.3.4 结论(Conclusion) ``` 1.macbert4csc_v1/macbert4csc_v2/macbert4mdcspell_v1等模型使用多种领域数据训练, 比较均衡, 也适合作为第一步的预训练模型, 可用于专有领域数据的继续微调; 2.比较macbert4csc_pycorrector/bertbase4csc_v1/macbert4csc_v2/macbert4mdcspell_v1, 观察表2.3, 可以发现训练数据越多, 准确率提升的同时, 误纠率也会稍微高一些; 3.MFT(Mask-Correct)依旧有效, 不过对于数据量足够的情形提升不明显, 可能也是误纠率升高的一个重要原因; 4.训练数据中也存在文言文数据, 训练好的模型也支持文言文纠错; 5.训练好的模型对"地得的"等高频错误具有较高的识别率和纠错率; 6.macbert4mdcspell_v2的MFT只70%的时间no-error-mask(0.15), 15%的时间target-to-target, 15%的时间不mask; ``` # 日志 ``` 1. v20240129, 完成csc_punct模块; 2. v20241001, 完成csc_token模块; 3. v20250117, 完成csc_eval模块; 4. v20250501, 完成macbert4mdcspell_v2 ``` # 参考 This library is inspired by and references following frameworks and papers. * Chinese-text-correction-papers: [nghuyong/Chinese-text-correction-papers](https://github.com/nghuyong/Chinese-text-correction-papers) * pycorrector: [shibing624/pycorrector](https://github.com/shibing624/pycorrector) * CTCResources: [destwang/CTCResources](https://github.com/destwang/CTCResources) * CSC: [wangwang110/CSC](https://github.com/wangwang110/CSC) * char-similar: [yongzhuo/char-similar](https://github.com/yongzhuo/char-similar) * MDCSpell: [iioSnail/MDCSpell_pytorch](https://github.com/iioSnail/MDCSpell_pytorch) * CSCD-NS: [nghuyong/cscd-ns](https://github.com/nghuyong/cscd-ns) * lemon: [gingasan/lemon](https://github.com/gingasan/lemon) * ReLM: [Claude-Liu/ReLM](https://github.com/Claude-Liu/ReLM) # 论文 ## 中文拼写纠错(CSC, Chinese Spelling Correction) * 共收录34篇论文, 写了一个简短的综述. 详见[README.csc_survey.md](https://github.com/yongzhuo/macro-correct/blob/master/README.csc_survey.md) # Cite For citing this work, you can refer to the present GitHub project. For example, with BibTeX: ``` @software{macro-correct, url = {https://github.com/yongzhuo/macro-correct}, author = {Yongzhuo Mo}, title = {macro-correct}, year = {2025} ```

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset, including:

  • text mining to find common themes in model cards (see the sketch after this list)
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
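As a minimal sketch of the text-mining use above, the snippet below loads the dataset with the datasets library and counts how often a few common section headings appear across cards. The "train" split name and the "card" column name are assumptions here; check the dataset viewer for the actual schema.

```python
from collections import Counter

from datasets import load_dataset

# Assumed split ("train") and column ("card") names -- verify against the viewer.
ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Count how often a few common model-card section headings appear.
headings = ["## Training Details", "## Evaluation", "## Bias, Risks, and Limitations"]
counts = Counter()
for card in ds["card"]:
    for heading in headings:
        if heading in card:
            counts[heading] += 1

print(counts.most_common())
```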

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.
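If you want to confirm the split name and inspect the schema before working with the data, a quick check with the datasets library looks like this (the "train" split name is again an assumption):

```python
from datasets import load_dataset

ds = load_dataset("librarian-bots/model_cards_with_metadata", split="train")
print(ds)           # row count and column names
print(ds.features)  # column types
print(ds[0])        # the first record
```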

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards directly, and this option may be preferable if you have a very specific use case or require a different format.
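As a rough sketch of that alternative route, the huggingface_hub client can list models and load their cards one at a time; this is slower than using the dataset but gives you live, per-repo control:

```python
from huggingface_hub import HfApi, ModelCard

api = HfApi()
for model in api.list_models(limit=3):
    try:
        card = ModelCard.load(model.id)
    except Exception:
        continue  # some repos have no README.md
    print(model.id, "->", len(card.text), "characters of card text")
```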

Source Data

The source data consists of the README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be present in the model repository.
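Because the source is just each repository's README.md, you can also download the raw file for a single model with hf_hub_download; "gpt2" below is only an example repository id:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="gpt2", filename="README.md")
with open(path, encoding="utf-8") as f:
    print(f.read()[:500])  # first 500 characters of the card
```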

Data Collection and Processing

The data is downloaded daily by a cron job.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we make no claims about the accuracy of the information they contain. Some model cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the model. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien
