modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
leedheo/qlora-koalpaca-polyglot-12.8b-5kstep
leedheo
2024-01-24T10:06:22Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-24T10:06:16Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
priyanshug0405/my_awesome_wnut_model
priyanshug0405
2024-01-24T10:06:22Z
92
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-23T12:58:37Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: my_awesome_wnut_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_wnut_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0581 - Precision: 0.9128 - Recall: 0.9097 - F1: 0.9112 - Accuracy: 0.9802 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0802 | 1.0 | 3750 | 0.0687 | 0.8897 | 0.8952 | 0.8924 | 0.9760 | | 0.0519 | 2.0 | 7500 | 0.0581 | 0.9128 | 0.9097 | 0.9112 | 0.9802 | | 0.0342 | 3.0 | 11250 | 0.0593 | 0.9174 | 0.9172 | 0.9173 | 0.9815 | | 0.0253 | 4.0 | 15000 | 0.0634 | 0.9204 | 0.9200 | 0.9202 | 0.9818 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
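A minimal inference sketch for this checkpoint, assuming the standard transformers token-classification pipeline (the aggregation strategy and example sentence are illustrative, not from the card):

```python
from transformers import pipeline

# Load the fine-tuned token-classification checkpoint from the Hub
ner = pipeline(
    "token-classification",
    model="priyanshug0405/my_awesome_wnut_model",
    aggregation_strategy="simple",  # group word pieces into whole entity spans (assumption)
)

print(ner("Heathrow airport was closed after a drone sighting on Tuesday."))
```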
Skier8402/GPT2_like_tokenizer
Skier8402
2024-01-24T10:05:10Z
0
0
transformers
[ "transformers", "gpt2", "BPT", "NLP", "HFcourse", "en", "dataset:wikitext", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-24T09:52:02Z
--- license: apache-2.0 datasets: - wikitext language: - en library_name: transformers tags: - gpt2 - BPT - NLP - HFcourse ---
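Since this repo ships a GPT-2-style BPE tokenizer rather than model weights, a minimal loading sketch might look like this (the example string is illustrative):

```python
from transformers import AutoTokenizer

# Load the GPT-2-style BPE tokenizer trained on wikitext
tokenizer = AutoTokenizer.from_pretrained("Skier8402/GPT2_like_tokenizer")

ids = tokenizer.encode("Byte-pair encoding splits rare words into subword units.")
print(ids)
print(tokenizer.convert_ids_to_tokens(ids))
```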
japinder007/distilbert-base-uncased-finetuned-emotion
japinder007
2024-01-24T10:03:20Z
93
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-24T10:03:03Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9215 - name: F1 type: f1 value: 0.9213621929009119 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2239 - Accuracy: 0.9215 - F1: 0.9214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8334 | 1.0 | 250 | 0.3313 | 0.9035 | 0.9017 | | 0.2542 | 2.0 | 500 | 0.2239 | 0.9215 | 0.9214 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
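A minimal inference sketch, assuming the standard text-classification pipeline (the example input is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="japinder007/distilbert-base-uncased-finetuned-emotion",
)

# top_k=None returns the scores for every emotion label instead of only the best one
print(classifier("I can't wait to see you again!", top_k=None))
```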
priyanshug0405/my_awesome_wnut_model_2
priyanshug0405
2024-01-24T10:02:25Z
44
0
transformers
[ "transformers", "tf", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-24T09:14:50Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_keras_callback model-index: - name: priyanshug0405/my_awesome_wnut_model_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # priyanshug0405/my_awesome_wnut_model_2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0378 - Validation Loss: 0.0597 - Train Precision: 0.9157 - Train Recall: 0.9151 - Train F1: 0.9154 - Train Accuracy: 0.9807 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11250, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1337 | 0.0709 | 0.8861 | 0.8913 | 0.8887 | 0.9753 | 0 | | 0.0562 | 0.0600 | 0.9113 | 0.9107 | 0.9110 | 0.9795 | 1 | | 0.0378 | 0.0597 | 0.9157 | 0.9151 | 0.9154 | 0.9807 | 2 | ### Framework versions - Transformers 4.37.0 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
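Because this checkpoint was trained with Keras (note the `tf` tag), a hedged loading sketch with the TensorFlow classes might look like this (the example sentence is illustrative):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "priyanshug0405/my_awesome_wnut_model_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("The Chicago Bulls played in Paris last night.", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.argmax(logits, axis=-1)
print([model.config.id2label[int(i)] for i in pred_ids[0]])
```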
soniarocca31/secondo_modello
soniarocca31
2024-01-24T09:55:52Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-multilingual-cased", "base_model:finetune:distilbert/distilbert-base-multilingual-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-16T11:14:33Z
--- license: apache-2.0 base_model: distilbert-base-multilingual-cased tags: - generated_from_trainer metrics: - accuracy model-index: - name: secondo_modello results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # secondo_modello This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1110 - Accuracy: 0.9783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0039 | 2.66 | 500 | 0.1290 | 0.9717 | | 0.0009 | 5.32 | 1000 | 0.1203 | 0.9733 | | 0.0007 | 7.98 | 1500 | 0.1110 | 0.9783 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
cpgrant/mistral-7b-text-to-sql
cpgrant
2024-01-24T09:48:07Z
5
0
peft
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-24T08:54:21Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-v0.1 model-index: - name: mistral-7b-text-to-sql results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-7b-text-to-sql This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 3 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 6 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
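Since this repo stores a PEFT (LoRA) adapter rather than full model weights, a hedged inference sketch would load it on top of the Mistral base model, assuming the adapter repo also ships a tokenizer and that a prompt like the one below is acceptable (the prompt format is not documented in the card):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the Mistral-7B-v0.1 base weights and applies the LoRA adapter on top
model = AutoPeftModelForCausalLM.from_pretrained(
    "cpgrant/mistral-7b-text-to-sql",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cpgrant/mistral-7b-text-to-sql")

# Illustrative prompt only; the expected prompt format is an assumption
prompt = "Translate the request into SQL: list all customers who placed an order in 2023."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```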
jsfs11/West-Dare-7B-GGUF
jsfs11
2024-01-24T09:48:06Z
1
1
null
[ "gguf", "merge", "mergekit", "lazymergekit", "senseable/Westlake-7B", "abideen/DareVox-7B", "base_model:abideen/DareVox-7B", "base_model:merge:abideen/DareVox-7B", "base_model:senseable/Westlake-7B", "base_model:merge:senseable/Westlake-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-01-23T03:11:07Z
--- tags: - merge - mergekit - lazymergekit - senseable/Westlake-7B - abideen/DareVox-7B base_model: - senseable/Westlake-7B - abideen/DareVox-7B license: apache-2.0 --- # West-Dare-7B West-Dare-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [senseable/Westlake-7B](https://huggingface.co/senseable/Westlake-7B) * [abideen/DareVox-7B](https://huggingface.co/abideen/DareVox-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: senseable/Westlake-7B parameters: density: 0.5 weight: 0.5 - model: abideen/DareVox-7B parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: true dtype: float16 ``` Credit to Maxime Labonne and his excellent blog: https://mlabonne.github.io/blog/
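For context, a merge configured like the YAML above is typically produced with mergekit; the sketch below is based on mergekit's documented Python entry points and may differ across versions (file name and output path are illustrative):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above, saved locally as config.yaml (assumption)
with open("config.yaml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Run the TIES merge and write the merged weights to disk
run_merge(config, out_path="./West-Dare-7B", options=MergeOptions(copy_tokenizer=True))
```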
Rocwo/Mistral-7B.F16-Instruct-v0.2-GGUF
Rocwo
2024-01-24T09:41:49Z
11
0
null
[ "gguf", "text-generation", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T13:44:05Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 model_creator: Mistral AI model_name: Mistral 7B Instruct v0.2 model_type: mistral pipeline_tag: text-generation prompt_template: '<s>[INST]{prompt} [/INST] ' quantized_by: Rocwo license: apache-2.0 --- ## Description Quantized [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) to F16 and converted to the llama.cpp (GGUF) format. ### About GGUF GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. ## Prompt template: Mistral ``` <s>[INST] {prompt} [/INST] ```
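A hedged inference sketch with llama-cpp-python, using the Mistral prompt template above (the `.gguf` filename pattern inside the repo is an assumption):

```python
from llama_cpp import Llama

# Downloads the GGUF file from the Hub; the filename glob is an assumption
llm = Llama.from_pretrained(
    repo_id="Rocwo/Mistral-7B.F16-Instruct-v0.2-GGUF",
    filename="*.gguf",
    n_ctx=4096,
)

# Prompt follows the Mistral template documented in the card
out = llm("<s>[INST] Explain the GGUF format in one sentence. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```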
ayeshgk/codet5-small-ft-v8-cpatd-ft-v8-cpat_dv5
ayeshgk
2024-01-24T09:41:11Z
91
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:ayeshgk/codet5-small-ft-v8-cpatd", "base_model:finetune:ayeshgk/codet5-small-ft-v8-cpatd", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-23T17:29:23Z
--- license: apache-2.0 base_model: ayeshgk/codet5-small-ft-v8-cpatd tags: - generated_from_trainer metrics: - rouge model-index: - name: codet5-small-ft-v8-cpatd-ft-v8-cpat_dv5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codet5-small-ft-v8-cpatd-ft-v8-cpat_dv5 This model is a fine-tuned version of [ayeshgk/codet5-small-ft-v8-cpatd](https://huggingface.co/ayeshgk/codet5-small-ft-v8-cpatd) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1715 - Rouge1: 86.4622 - Rouge2: 76.541 - Rougel: 85.4053 - Rougelsum: 85.4218 - Gen Len: 13.7143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 20 | 0.3355 | 82.9142 | 68.2812 | 81.8458 | 81.8291 | 13.3857 | | No log | 2.0 | 40 | 0.2333 | 84.3779 | 72.1358 | 83.5562 | 83.5825 | 13.3571 | | No log | 3.0 | 60 | 0.1830 | 86.0988 | 75.5643 | 85.1853 | 85.2291 | 13.6143 | | No log | 4.0 | 80 | 0.1715 | 86.4622 | 76.541 | 85.4053 | 85.4218 | 13.7143 | ### Framework versions - Transformers 4.38.0.dev0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
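A minimal inference sketch via the text2text-generation pipeline (the input is illustrative only; the expected input format is not documented in the card):

```python
from transformers import pipeline

codet5 = pipeline(
    "text2text-generation",
    model="ayeshgk/codet5-small-ft-v8-cpatd-ft-v8-cpat_dv5",
)

# Illustrative code snippet as input; the training task is not documented in the card
print(codet5("public int add(int a, int b) { return a - b; }", max_length=64))
```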
Rocwo/Mistral-7B.Q8_0-Instruct-v0.2-GGUF
Rocwo
2024-01-24T09:41:05Z
8
0
null
[ "gguf", "text-generation", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
2024-01-17T13:26:34Z
--- base_model: mistralai/Mistral-7B-Instruct-v0.2 model_creator: Mistral AI model_name: Mistral 7B Instruct v0.2 model_type: mistral pipeline_tag: text-generation prompt_template: '<s>[INST]{prompt} [/INST] ' quantized_by: Rocwo license: apache-2.0 --- ## Description Quantized [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) to Q8_0 and converted to the llama.cpp (GGUF) format. ### About GGUF GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. ## Prompt template: Mistral ``` <s>[INST] {prompt} [/INST] ```
pborchert/BusinessBERT
pborchert
2024-01-24T09:39:15Z
690
14
transformers
[ "transformers", "pytorch", "bert", "business", "finance", "industry-classification", "fill-mask", "en", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-12T09:30:17Z
--- license: cc-by-4.0 language: - en tags: - business - finance - industry-classification pipeline_tag: fill-mask widget: - text: "Sanofi is in the [MASK] industry." - text: "The current ratio measures [MASK]." --- # BusinessBERT An industry-sensitive language model for business applications pretrained on business communication corpora. The model incorporates industry classification (IC) as a pretraining objective besides masked language modeling (MLM). It was introduced in [this paper](https://www.sciencedirect.com/science/article/pii/S0377221724000444) and released in [this repository](https://github.com/pnborchert/BusinessBERT). ## Model description We introduce BusinessBERT, an industry-sensitive language model for business applications. The advantage of the model is a training approach focused on incorporating industry information relevant to business-related natural language processing (NLP) tasks. We compile three large-scale textual corpora consisting of annual disclosures, company website content and scientific literature representing business communication. In total, the corpora include 2.23 billion tokens. BusinessBERT builds upon the bidirectional encoder representations from transformers (BERT) architecture and embeds industry information during pretraining in two ways: (1) The business communication corpora contain a variety of industry-specific terminology; (2) We employ industry classification (IC) as an additional pretraining objective for text documents originating from companies. ## Intended uses & limitations The model is intended to be fine-tuned on business-related NLP tasks, e.g. sequence classification, named entity recognition, sentiment analysis or question answering. ## Training data - [CompanyWeb](https://huggingface.co/datasets/pborchert/CompanyWeb): 0.77 billion tokens, 3.5 GB raw text file - [MD&A Disclosures](https://data.caltech.edu/records/1249): 1.06 billion tokens, 5.1 GB raw text file - [Semantic Scholar Open Research Corpus](https://api.semanticscholar.org/corpus): 0.40 billion tokens, 1.9 GB raw text file ## Evaluation results Classification Tasks: | Task | Financial Risk (F1/Acc) | News Headline Topic (F1/Acc) | |:----:|:-----------:|:----:| | | 85.89/87.02 | 75.06/67.71 | Named Entity Recognition: | Task | SEC Filings (F1/Prec/Rec) | |:----:|:-----------:| | | 79.82/77.45/83.38 | Sentiment Analysis: | Task | FiQA (MSE/MAE) | Financial Phrasebank (F1/Acc) | StockTweets (F1/Acc) | |:----:|:-----------:|:----:| :----:| | | 0.0758/0.202 | 75.06/67.71 | 69.14/69.54 | Question Answering: | Task | FinQA (Exe Acc/Prog Acc) | |:----:|:-----------:| | | 60.07/57.19 | ### BibTeX entry and citation info ```bibtex @article{BORCHERT2024, title = {Industry-sensitive language modeling for business}, journal = {European Journal of Operational Research}, year = {2024}, issn = {0377-2217}, doi = {https://doi.org/10.1016/j.ejor.2024.01.023}, url = {https://www.sciencedirect.com/science/article/pii/S0377221724000444}, author = {Philipp Borchert and Kristof Coussement and Jochen {De Weerdt} and Arno {De Caigny}}, } ```
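A quick fill-mask sketch, reusing one of the widget prompts declared in the card's own metadata:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="pborchert/BusinessBERT")

# Widget prompt taken from the model card
for pred in fill("Sanofi is in the [MASK] industry."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```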
avemio-digital/SauerSci_Merge
avemio-digital
2024-01-24T09:36:59Z
7
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:avemio-digital/SauerkrautLM_chat_merge", "base_model:merge:avemio-digital/SauerkrautLM_chat_merge", "base_model:avemio-digital/lora_model_scipy_merged", "base_model:merge:avemio-digital/lora_model_scipy_merged", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T09:16:12Z
--- base_model: - avemio-digital/lora_model_scipy_merged - avemio-digital/SauerkrautLM_chat_merge tags: - mergekit - merge --- # mergedmodel This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [avemio-digital/lora_model_scipy_merged](https://huggingface.co/avemio-digital/lora_model_scipy_merged) * [avemio-digital/SauerkrautLM_chat_merge](https://huggingface.co/avemio-digital/SauerkrautLM_chat_merge) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: avemio-digital/lora_model_scipy_merged layer_range: [0, 32] - model: avemio-digital/SauerkrautLM_chat_merge layer_range: [0, 32] merge_method: slerp base_model: avemio-digital/SauerkrautLM_chat_merge parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
varun-v-rao/t5-large-snli
varun-v-rao
2024-01-24T09:21:47Z
4
0
transformers
[ "transformers", "tensorboard", "safetensors", "t5", "text-classification", "generated_from_trainer", "base_model:google-t5/t5-large", "base_model:finetune:google-t5/t5-large", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
2024-01-21T06:11:37Z
--- license: apache-2.0 base_model: t5-large tags: - generated_from_trainer metrics: - accuracy model-index: - name: t5-large-snli results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-large-snli This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2221 - Accuracy: 0.9268 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2842 | 1.0 | 4292 | 0.2240 | 0.9224 | | 0.2442 | 2.0 | 8584 | 0.2144 | 0.9255 | | 0.2234 | 3.0 | 12876 | 0.2221 | 0.9268 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.0.1+cu118 - Datasets 2.15.0 - Tokenizers 0.15.0
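As this is a T5 encoder fine-tuned with a classification head (note the text-classification tag), a hedged usage sketch via the pipeline API, assuming SNLI-style premise/hypothesis pairs passed as text/text_pair:

```python
from transformers import pipeline

nli = pipeline("text-classification", model="varun-v-rao/t5-large-snli")

# SNLI premise/hypothesis pair; the pairing convention is an assumption
print(nli({"text": "A man inspects the uniform of a figure.", "text_pair": "The man is sleeping."}))
```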
Florent-COMPAGNONI/esgi-nlp-tp4-virtual_assistant_pipeline
Florent-COMPAGNONI
2024-01-24T09:20:11Z
90
1
transformers
[ "transformers", "safetensors", "roberta", "token-classification", "fr", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
2024-01-12T16:14:55Z
--- language: - fr --- # NLP - TD4 This repository contains the pipeline built for part 2 of TD 4. It holds a NER model that recognizes "person" and "content" entities, obtained by transfer learning on `roberta-base`. The post-processing step turns the model's predictions into an API call of the form: ```json { "job": "send_message", "receiver": [person in sentence], "content": [content in sentence], } ``` ## Usage example ```python from transformers import pipeline pipe = pipeline("api-call", model="Florent-COMPAGNONI/esgi-nlp-tp4-virtual_assistant_pipeline", trust_remote_code=True) pipe("Create a note for Dan for 6pm I mean 7pm that says food is on the table.") ``` output ```bash { 'job': 'send_message', 'person': ['Dan'], 'content': ['food', 'is', 'on', 'the', 'table'] } ``` The code for the pipeline lives in the file `virtual_assistant_pipeline.py`. The notebooks tracing the training of the model and the creation of the pipeline are in the `notebooks/` folder.
meetkai/functionary-small-v2.2-GGUF
meetkai
2024-01-24T09:09:12Z
185
14
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-11T16:54:45Z
# Model Card for functionary-small-v2.2-GGUF [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-medium-v2.2-GGUF
meetkai
2024-01-24T09:08:29Z
42
12
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2024-01-12T01:27:38Z
# Model Card for functionary-medium-v2.2-GGUF [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
meetkai/functionary-7b-v2-GGUF
meetkai
2024-01-24T09:06:31Z
46
7
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
2023-12-07T15:51:25Z
# Model Card for functionary-7b-v2-GGUF [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v2PromptTemplate" that breaks down each turn into from, recipient and content portions. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` <|from|>system <|recipient|>all <|content|>// Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions <|from|>system <|recipient|>all <|content|>A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
The assistant calls functions with appropriate input when necessary <|from|>user <|recipient|>all <|content|>What is the weather for Istanbul? ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v2.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
amazingYX/myppo-LunarLander-v2
amazingYX
2024-01-24T09:04:35Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T08:59:05Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 194.91 +/- 89.54 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained policy from the Hub; the filename is an assumption
checkpoint = load_from_hub(repo_id="amazingYX/myppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
meetkai/functionary-7b-v1.4
meetkai
2024-01-24T09:04:34Z
20
2
transformers
[ "transformers", "pytorch", "mistral", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2023-11-22T09:07:05Z
# Model Card for functionary-7b-v1.4 [https://github.com/MeetKai/functionary](https://github.com/MeetKai/functionary) ![Functionary Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/functionary_logo.jpg "Functionary Logo") Functionary is a language model that can interpret and execute functions/plugins. The model determines when to execute functions, whether in parallel or serially, and can understand their outputs. It only triggers functions as needed. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. ## Key Features - Intelligent **parallel tool use** - Able to analyze function/tool outputs and provide relevant responses **grounded in the outputs** - Able to decide **when not to use tools/call functions** and provide a normal chat response - Truly one of the best open-source alternatives to GPT-4 ## Performance Our model achieves state-of-the-art performance in Function Calling Accuracy on our in-house dataset. The accuracy metric measures the overall correctness of predicted function calls, including function name prediction and arguments extraction. ![Eval Chart](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/evaluation_chart.jpeg "Eval Chart") | Dataset | Model Name | Function Calling Accuracy (Name & Arguments) | | :-------------| :-------------------| ---------------------------: | | In-house data | MeetKai-functionary-small-v2.2 | 0.546| | In-house data | MeetKai-functionary-medium-v2.2 | **0.664**| | In-house data | OpenAI-gpt-3.5-turbo-1106 | 0.531 | | In-house data | OpenAI-gpt-4-1106-preview | **0.737** | ## Prompt Template We use a specially designed prompt template which we call "v1PromptTemplate" that uses a variety of special tokens in each turn. We convert function definitions to text similar to TypeScript definitions. Then we inject these definitions as system prompts. After that, we inject the default system prompt. Then we start the conversation messages. This formatting is also available via our vLLM server, which processes the functions into TypeScript definitions encapsulated in a system message and uses a pre-defined Transformers chat template. This means that lists of messages can be formatted for you with the apply_chat_template() method within our server: ```python from openai import OpenAI client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary") client.chat.completions.create( model="path/to/functionary/model/", messages=[{"role": "user", "content": "What is the weather for Istanbul?"} ], tools=[{ "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } }, "required": ["location"] } } }], tool_choice="auto" ) ``` will yield: ``` system: // Supported function definitions that should be called when necessary. namespace functions { // Get the current weather type get_current_weather = (_: { // The city and state, e.g. San Francisco, CA location: string, }) => any; } // namespace functions<|END_OF_SYSTEM|> system: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. 
The assistant calls functions with appropriate input when necessary<|END_OF_SYSTEM|> user: What is the weather for Istanbul?<|END_OF_USER|> ``` A more detailed example is provided [here](https://github.com/MeetKai/functionary/blob/main/tests/prompt_test_v1.txt). ## Run the model We encourage users to run our models using our OpenAI-compatible vLLM server [here](https://github.com/MeetKai/functionary). # The MeetKai Team ![MeetKai Logo](https://huggingface.co/meetkai/functionary-medium-v2.2/resolve/main/meetkai_logo.png "MeetKai Logo")
DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K
DAMO-NLP-SG
2024-01-24T08:56:50Z
10
3
transformers
[ "transformers", "safetensors", "mixtral", "text-generation", "custom_code", "dataset:DAMO-NLP-SG/LongCorpus-2.5B", "arxiv:2310.16450", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-19T08:55:27Z
--- license: mit datasets: - DAMO-NLP-SG/LongCorpus-2.5B --- # CLEX: Continuous Length Extrapolation for Large Language Models This repo stores the checkpoint of CLEX-Mixtral-8x7B-32K. ## Features and Highlights of CLEX ![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf) - **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer introduced, _NO_ recurrent memory caching or sparse attention required. - **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)). - **Continuous Length Extrapolation**: Explicitly modeling the continuous dynamics of context window size during length extrapolation. If you have any questions, feel free to contact us. (Emails: [email protected], [email protected]) ## Model Zoo <div align="center"> | Model Name | Model Type | Starting Point | Train Data | Train Length | MAX Test Length | HF Repo | |:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:| | CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) | | CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) | | CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64k | 256K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K) | | CLEX-Phi-2-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32k | 128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Phi-2-32K) | | **CLEX-Mixtral-8x7B-32K** (this checkpoint) | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32k | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K) | | CLEX-Mixtral-8x7B-Chat-32k | chat | CLEX-Mixtral-8x7B-32K | [Ultrachat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32k | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K) | </div> ## Usage ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K", trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K", torch_dtype=torch.bfloat16, trust_remote_code=True) inputs = tokenizer("What is CLEX?", return_tensors="pt") sample = model.generate(**inputs, max_length=128) print(tokenizer.decode(sample[0])) ``` ## Evaluation ### Language Modelling The CLEX-Phi-2-2.7B and CLEX-Mixtral-8x7B models are trained on [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B); the evaluation results on its test set are listed below.
| | Train Length | Eval.(32k) | Eval.(64k) | Eval.(128k) | Eval.(256k) | | ----------------- | ------------ | ---------- | ---------- | ----------- | ----------- | | Mixtral-8x7B | 32k | 2.78 | 3.44 | 5.88 | 14.20 | | CLEX-Mixtral-8x7B | 32k | 2.56 | 2.53 | 2.57 | 3.78 | ## Citation If you find our project useful, we hope you will star our repo and cite our paper as follows: ``` @article{damonlpsg2023clex, author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong}, title = {CLEX: Continuous Length Extrapolation for Large Language Models}, year = 2023, journal = {arXiv preprint arXiv:2310.16450}, url = {https://arxiv.org/abs/2310.16450} } ```
imagepipeline/Animagine-XL-v3
imagepipeline
2024-01-24T08:54:10Z
43
1
diffusers
[ "diffusers", "imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-01-24T08:51:01Z
--- license: creativeml-openrail-m tags: - imagepipeline - imagepipeline.io - text-to-image - ultra-realistic pinned: false pipeline_tag: text-to-image --- ## Animagine-XL-v3 <img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d361fb61-ba8e-485e-a2f3-cdbbca82bad8/original=true/ComfyUI_00673_.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;"> **This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)** Model details - Recommended settings: To guide the model towards generating high-aesthetic images, use negative prompts like: nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name. For higher quality outcomes, prepend prompts with: masterpiece, best quality. However, be careful when using masterpiece, best quality, because many high-scored datasets are NSFW. It’s better to add nsfw, rating: sensitive to the negative prompt and rating: general to the positive prompt. It’s recommended to use a lower classifier-free guidance (CFG Scale) of around 5-7, sampling steps below 30, and Euler Ancestral (Euler a) as a sampler. [![Try this model](https://img.shields.io/badge/try_this_model-image_pipeline-BD9319)](https://imagepipeline.io/models/Animagine-XL-v3?id=bd1a6e69-8df3-4735-9817-991d190c3cb6/) ## How to try this model? You can try using it locally or send an API call to test the output quality. Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required. Coding in `php` `javascript` `node` etc.? Check out our documentation [![documentation](https://img.shields.io/badge/documentation-image_pipeline-blue)](https://docs.imagepipeline.io/docs/introduction) ```python import requests import json url = "https://imagepipeline.io/sdxl/text2image/v1/run" payload = json.dumps({ "model_id": "bd1a6e69-8df3-4735-9817-991d190c3cb6", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": false, "guidance_scale": 7.5, "multi_lingual": "no", "embeddings": "", "lora_models": "", "lora_weights": "" }) headers = { 'Content-Type': 'application/json', 'API-Key': 'your_api_key' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`: [![All models](https://img.shields.io/badge/Get%20All%20Models-image_pipeline-BD9319)](https://imagepipeline.io/models) ### API Reference #### Generate Image ```http https://api.imagepipeline.io/sdxl/text2image/v1 ``` | Headers | Type | Description | |:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------| | `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) | | `Content-Type` | `str` | application/json - content type of the request body | | Parameter | Type | Description | | :-------- | :------- | :------------------------- | | `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own| | `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips | | `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) | | `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 | | `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page | | `lora_weights` | `str, array` | Strength of the LoRA effect | ### Feedback If you have any feedback, please reach out to us at [email protected] #### 🔗 Visit Website [![portfolio](https://img.shields.io/badge/image_pipeline-BD9319?style=for-the-badge&logo=gocd&logoColor=white)](https://imagepipeline.io/) If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits.
HeydarS/opt-350m_peft_v5
HeydarS
2024-01-24T08:41:06Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "region:us" ]
null
2024-01-24T08:12:41Z
--- library_name: peft base_model: facebook/opt-350m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
ssarkar4445/tinyllama-colorist-peft
ssarkar4445
2024-01-24T08:39:08Z
79
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T08:54:36Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
jiudth/ppo-Huggy
jiudth
2024-01-24T08:38:48Z
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-24T08:38:42Z
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---

# **ppo** Agent playing **Huggy**

This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training

```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: jiudth/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
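For example, assuming the Huggy configuration file from the Deep RL course lives at `./config/ppo/Huggy.yaml` and the original run was started with `--run-id=Huggy` (both are assumptions about your local setup), the resume command would look like:

```bash
# Continue training from the checkpoints saved under results/Huggy
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```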
internlm/internlm-chat-20b
internlm
2024-01-24T08:36:37Z
186
135
transformers
[ "transformers", "pytorch", "internlm", "feature-extraction", "text-generation", "custom_code", "license:apache-2.0", "region:us" ]
text-generation
2023-09-18T03:28:40Z
---
license: apache-2.0
pipeline_tag: text-generation
---

**InternLM**

<div align="center">
<img src="https://github.com/InternLM/InternLM/assets/22529082/b9788105-8892-4398-8b47-b513a292378e" width="200"/>
<div>&nbsp;</div>
<div align="center">
<b><font size="5">InternLM</font></b>
<sup>
<a href="https://internlm.intern-ai.org.cn/">
<i><font size="4">HOT</font></i>
</a>
</sup>
<div>&nbsp;</div>
</div>

[![evaluation](https://github.com/InternLM/InternLM/assets/22529082/f80a2a58-5ddf-471a-8da4-32ab65c8fd3b)](https://github.com/internLM/OpenCompass/)

[💻Github Repo](https://github.com/InternLM/InternLM) • [🤔Reporting Issues](https://github.com/InternLM/InternLM/issues/new)

</div>

## Introduction

The Shanghai Artificial Intelligence Laboratory, in collaboration with SenseTime Technology, the Chinese University of Hong Kong, and Fudan University, has officially released the 20-billion-parameter pretrained model InternLM-20B. InternLM-20B was pre-trained on over **2.3T** tokens containing high-quality English, Chinese, and code data. Additionally, the Chat version has undergone SFT and RLHF training, enabling it to better and more securely meet users' needs.

In terms of model structure, InternLM-20B opted for a deeper architecture, with a depth of 60 layers. This surpasses the conventional 7B and 13B models, which use 32 or 40 layers. When parameters are limited, increasing the number of layers can enhance the model's overall capability. Furthermore, compared to InternLM-7B, the pre-training data used for InternLM-20B underwent higher-quality cleansing and was supplemented with data rich in knowledge and designed to reinforce understanding and reasoning capabilities. As a result, it exhibits significant improvements in understanding, reasoning, mathematical, and programming abilities, all of which test the technical proficiency of language models. Overall, InternLM-20B features the following characteristics:

- Outstanding overall performance
- Strong utility invocation capability
- Supports a 16k context length (through inference extrapolation)
- Better value alignment

## Performance Evaluation

On the 5 capability dimensions proposed by OpenCompass, InternLM-20B has achieved excellent results (the bolded scores represent the best performances within the 13B-33B parameter range).

| Capability | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B |
|----------|-----------|------------|---------------|--------------|-----------|-----------|------------|
| Language | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 |
| Knowledge | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 |
| Understanding | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 |
| Reasoning | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 |
| Examination | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 |
| Overall | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 |

The table below compares the performance of mainstream open-source models on some influential and typical datasets.
| | Benchmarks | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B | |------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------| | Examination | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 | | | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 | | | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 | | Knowledge | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 | | | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 | | | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 | | Understanding | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 | | | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 | | | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 | | | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 | | | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 | | Reasoning | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 | | | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 | | | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 | | | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 | | Programming | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 | | | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 | Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in terms of overall capabilities, and on inference evaluation sets, it approaches or even surpasses the performance of Llama-65B. ## Import from Transformers To load the InternLM 20B model using Transformers, use the following code: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True) # Set `torch_dtype=torch.bfloat16` to load model in bfloat16, otherwise it will be loaded as float32 and cause OOM Error. model = AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", torch_dtype=torch.bfloat16, trust_remote_code=True).cuda() model = model.eval() output, history = model.chat(tokenizer, "Hello! Today is sunny, it is time to go out") print(output) # Hello! Today is sunny, and it sounds like a great day to go out an enjoy the weather. What would you like to do? ``` The responses can be streamed using `stream_chat`: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "internlm/internlm-chat-20b" model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) model = model.eval() length = 0 for response, history in model.stream_chat(tokenizer, "Hello", history=[]): print(response[length:], flush=True, end="") length = len(response) ``` **Limitations:** Although we have made efforts to ensure the safety of the model during the training process and to encourage the model to generate text that complies with ethical and legal requirements, the model may still produce unexpected outputs due to its size and probabilistic generation paradigm. For example, the generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. 
We are not responsible for any consequences resulting from the dissemination of harmful information. ## Open Source License The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow **free** commercial usage. To apply for a commercial license, please fill in the [application form (English)](https://wj.qq.com/s2/12727483/5dba/)/[申请表(中文)](https://wj.qq.com/s2/12725412/f7c1/). For other questions or collaborations, please contact <[email protected]>. ## 简介 上海人工智能实验室与商汤科技联合香港中文大学和复旦大学正式推出书生·浦语200亿参数模型版本 InternLM-20B ,InternLM-20B 在超过 **2.3T** Tokens 包含高质量英文、中文和代码的数据上进行预训练,其中 Chat 版本还经过了 SFT 和 RLHF 训练,使其能够更好、更安全地满足用户的需求。 InternLM 20B 在模型结构上选择了深结构,层数设定为60层,超过常规7B和13B模型所使用的32层或者40层。在参数受限的情况下,提高层数有利于提高模型的综合能力。此外,相较于InternLM-7B,InternLM-20B使用的预训练数据经过了更高质量的清洗,并补充了高知识密度和用于强化理解与推理能力的训练数据。因此,它在理解能力、推理能力、数学能力、编程能力等考验语言模型技术水平的方面都得到了显著提升。总体而言,InternLM-20B具有以下的特点: - 优异的综合性能 - 很强的工具调用功能 - 支持16k语境长度(通过推理时外推) - 更好的价值对齐 ## 性能评测 在OpenCompass提出的5个能力维度上,InternLM-20B都取得很好的效果(粗体为13B-33B这个量级范围内,各项最佳成绩) | 能力维度 | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B | |----------|-----------|------------|---------------|--------------|-----------|-----------|------------| | 语言 | 42.5 | 47 | 47.5 | **55** | 44.6 | 47.1 | 51.6 | | 知识 | 58.2 | 58.3 | 48.9 | 60.1 | **64** | 66 | 67.7 | | 理解 | 45.5 | 50.9 | 58.1 | **67.3** | 50.6 | 54.2 | 60.8 | | 推理 | 42.7 | 43.6 | 44.2 | **54.9** | 46.4 | 49.8 | 55 | | 学科 | 37.3 | 45.2 | 51.8 | **62.5** | 47.4 | 49.7 | 57.3 | | 总平均 | 43.8 | 47.3 | 49.4 | **59.2** | 48.9 | 51.9 | 57.4 | 下表展示了在多个经典数据集上 InternLM 20B 与各个主流开源模型的表现 | | 评测集 | Llama-13B | Llama2-13B | Baichuan2-13B | InternLM-20B | Llama-33B | Llama-65B | Llama2-70B | |------|------------------|-----------|------------|---------------|--------------|-----------|-----------|------------| | 学科 | MMLU | 47.73 | 54.99 | 59.55 | **62.05** | 58.73 | 63.71 | 69.75 | | | C-Eval (val) | 31.83 | 41.4 | **59.01** | 58.8 | 37.47 | 40.36 | 50.13 | | | AGI-Eval | 22.03 | 30.93 | 37.37 | **44.58** | 33.53 | 33.92 | 40.02 | | 知识 | BoolQ | 78.75 | 82.42 | 67 | **87.46** | 84.43 | 86.61 | 87.74 | | | TriviaQA | 52.47 | 59.36 | 46.61 | 57.26 | **66.24** | 69.79 | 70.71 | | | NaturalQuestions | 20.17 | 24.85 | 16.32 | 25.15 | **30.89** | 33.41 | 34.16 | | 理解 | CMRC | 9.26 | 31.59 | 29.85 | **68.78** | 14.17 | 34.73 | 43.74 | | | CSL | 55 | 58.75 | 63.12 | **65.62** | 57.5 | 59.38 | 60 | | | RACE (middle) | 53.41 | 63.02 | 68.94 | **86.35** | 64.55 | 72.35 | 81.55 | | | RACE (high) | 47.63 | 58.86 | 67.18 | **83.28** | 62.61 | 68.01 | 79.93 | | | XSum | 20.37 | 23.37 | 25.23 | **35.54** | 20.55 | 19.91 | 25.38 | | 推理 | WinoGrande | 64.64 | 64.01 | 67.32 | **69.38** | 66.85 | 69.38 | 69.77 | | | BBH | 37.93 | 45.62 | 48.98 | **52.51** | 49.98 | 58.38 | 64.91 | | | GSM8K | 20.32 | 29.57 | **52.62** | **52.62** | 42.3 | 54.44 | 63.31 | | | PIQA | 79.71 | 79.76 | 78.07 | 80.25 | **81.34** | 82.15 | 82.54 | | 编程 | HumanEval | 14.02 | 18.9 | 17.07 | **25.61** | 17.68 | 18.9 | 26.22 | | | MBPP | 20.6 | 26.8 | 30.8 | **35.6** | 28.4 | 33.6 | 39.6 | 总体而言,InternLM-20B 在综合能力上全面领先于13B量级的开源模型,同时在推理评测集上能够接近甚至超越Llama-65B的性能。 ## 通过 Transformers 加载 通过以下的代码加载 InternLM 20B 模型 ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-chat-20b", trust_remote_code=True) # `torch_dtype=torch.bfloat16` 可以令模型以 bfloat16 精度加载,否则 transformers 会将模型加载为 float32,导致显存不足 model = 
AutoModelForCausalLM.from_pretrained("internlm/internlm-chat-20b", torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model = model.eval()
output, history = model.chat(tokenizer, "你好呀!今天天气真好")
print(output)
# 你好!是的,今天的天气非常晴朗,非常适合户外活动。
```

如果想进行流式生成,则可以使用 stream_chat 接口:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm-chat-20b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()
length = 0
for response, history in model.stream_chat(tokenizer, "你好", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```

**局限性:** 尽管在训练过程中我们非常注重模型的安全性,尽力促使模型输出符合伦理和法律要求的文本,但受限于模型大小以及概率生成范式,模型可能会产生各种不符合预期的输出,例如回复内容包含偏见、歧视等有害内容,请勿传播这些内容。由于传播不良信息导致的任何后果,本项目不承担责任。

## 开源许可证

本仓库的代码依照 Apache-2.0 协议开源。模型权重对学术研究完全开放,也可申请免费的商业使用授权([申请表](https://wj.qq.com/s2/12725412/f7c1/))。其他问题与合作请联系 <[email protected]>。
natutaro/distilbert-base-uncased-finetuned-emotion
natutaro
2024-01-24T08:36:25Z
90
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-24T08:26:36Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.924 - name: F1 type: f1 value: 0.9242530208994125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2109 - Accuracy: 0.924 - F1: 0.9243 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7872 | 1.0 | 250 | 0.2976 | 0.91 | 0.9100 | | 0.2384 | 2.0 | 500 | 0.2109 | 0.924 | 0.9243 | ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
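The card does not yet include a usage snippet; below is a minimal, hypothetical sketch of running the classifier with the `transformers` pipeline API. The repo id comes from this card, and the label set comes from the emotion dataset; the printed output is illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
classifier = pipeline(
    "text-classification",
    model="natutaro/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset has six labels: sadness, joy, love, anger, fear, surprise.
print(classifier("I can't wait to see you again!"))
# e.g. [{'label': 'joy', 'score': 0.98}]  (illustrative output)
```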
IvanDart/More_AnimagineXL
IvanDart
2024-01-24T08:32:35Z
0
0
null
[ "license:other", "region:us" ]
null
2024-01-24T08:32:35Z
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ ---
mayflowergmbh/Wiedervereinigung-7b
mayflowergmbh
2024-01-24T08:26:51Z
9
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "DiscoResearch/DiscoLM_German_7b_v1", "DRXD1000/Phoenix", "VAGOsolutions/SauerkrautLM-7b-v1-mistral", "malteos/hermeo-7b", "base_model:DRXD1000/Phoenix-7B", "base_model:merge:DRXD1000/Phoenix-7B", "base_model:DiscoResearch/DiscoLM_German_7b_v1", "base_model:merge:DiscoResearch/DiscoLM_German_7b_v1", "base_model:VAGOsolutions/SauerkrautLM-7b-v1-mistral", "base_model:merge:VAGOsolutions/SauerkrautLM-7b-v1-mistral", "base_model:malteos/hermeo-7b", "base_model:merge:malteos/hermeo-7b", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T07:32:18Z
--- tags: - merge - mergekit - lazymergekit - DiscoResearch/DiscoLM_German_7b_v1 - DRXD1000/Phoenix - VAGOsolutions/SauerkrautLM-7b-v1-mistral - malteos/hermeo-7b base_model: - DiscoResearch/DiscoLM_German_7b_v1 - DRXD1000/Phoenix - VAGOsolutions/SauerkrautLM-7b-v1-mistral - malteos/hermeo-7b --- # Wiedervereinigung-7b ![image/png](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b/resolve/main/Wiedervereinigung-7b.png) Wiedervereinigung-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1) * [DRXD1000/Phoenix](https://huggingface.co/DRXD1000/Phoenix) * [VAGOsolutions/SauerkrautLM-7b-v1-mistral](https://huggingface.co/VAGOsolutions/SauerkrautLM-7b-v1-mistral) * [malteos/hermeo-7b](https://huggingface.co/malteos/hermeo-7b) ## 🧩 Configuration ```yaml models: - model: LeoLM/leo-mistral-hessianai-7b # No parameters necessary for base model - model: DiscoResearch/DiscoLM_German_7b_v1 parameters: density: 0.6 weight: 0.25 - model: DRXD1000/Phoenix parameters: density: 0.6 weight: 0.25 - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral parameters: density: 0.6 weight: 0.25 - model: malteos/hermeo-7b parameters: density: 0.6 weight: 0.25 merge_method: dare_ties base_model: LeoLM/leo-mistral-hessianai-7b parameters: int8_mask: true dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "mayflowergmbh/Wiedervereinigung-7b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
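As a sketch of how a configuration like the one in the Configuration section is typically consumed, assuming `mergekit` is installed, the YAML is saved as `config.yaml`, and you have enough memory for five 7B checkpoints (all assumptions; this card ships only the configuration itself):

```bash
pip install mergekit
# mergekit's CLI entry point: read the YAML and write the merged model.
mergekit-yaml config.yaml ./Wiedervereinigung-7b --cuda
```

With `dare_ties`, each fine-tune's delta from the base model is randomly sparsified (here `density: 0.6` keeps roughly 60% of the delta) before a sign-consensus TIES merge, which is what lets the four models be combined at equal weight without drowning each other out.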
HeydarS/opt-350m_peft_v6
HeydarS
2024-01-24T08:26:37Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:facebook/opt-350m", "base_model:adapter:facebook/opt-350m", "region:us" ]
null
2024-01-24T08:26:33Z
--- library_name: peft base_model: facebook/opt-350m --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Gayathri142214002/Question_Generation_ComQ_5_2
Gayathri142214002
2024-01-24T08:23:19Z
5
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Gayathri142214002/Question_Generation_ComQ_4", "base_model:finetune:Gayathri142214002/Question_Generation_ComQ_4", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-24T07:13:19Z
--- license: apache-2.0 base_model: Gayathri142214002/Question_Generation_ComQ_4 tags: - generated_from_trainer model-index: - name: Question_Generation_ComQ_5_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Question_Generation_ComQ_5_2 This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_4](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_4) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3698 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.3089 | 0.28 | 100 | 0.2672 | | 0.3228 | 0.56 | 200 | 0.2790 | | 0.3151 | 0.84 | 300 | 0.2883 | | 0.2723 | 1.12 | 400 | 0.2932 | | 0.2577 | 1.39 | 500 | 0.3135 | | 0.2693 | 1.67 | 600 | 0.3270 | | 0.269 | 1.95 | 700 | 0.3046 | | 0.2263 | 2.23 | 800 | 0.3335 | | 0.2215 | 2.51 | 900 | 0.3325 | | 0.2504 | 2.79 | 1000 | 0.3301 | | 0.2184 | 3.07 | 1100 | 0.3324 | | 0.1991 | 3.35 | 1200 | 0.3462 | | 0.203 | 3.63 | 1300 | 0.3452 | | 0.2156 | 3.91 | 1400 | 0.3416 | | 0.1889 | 4.18 | 1500 | 0.3565 | | 0.1783 | 4.46 | 1600 | 0.3590 | | 0.196 | 4.74 | 1700 | 0.3569 | | 0.1994 | 5.02 | 1800 | 0.3500 | | 0.1593 | 5.3 | 1900 | 0.3588 | | 0.1761 | 5.58 | 2000 | 0.3642 | | 0.1729 | 5.86 | 2100 | 0.3651 | | 0.1708 | 6.14 | 2200 | 0.3652 | | 0.1661 | 6.42 | 2300 | 0.3670 | | 0.1522 | 6.69 | 2400 | 0.3684 | | 0.1484 | 6.97 | 2500 | 0.3698 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
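The card has no usage section; here is a minimal, hypothetical inference sketch for this text2text checkpoint. The input formatting is an assumption, since the card does not document the expected prompt format:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Gayathri142214002/Question_Generation_ComQ_5_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input; the expected context/answer formatting is undocumented.
context = "The Eiffel Tower was completed in 1889 and is located in Paris."
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```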
Lalith16/Zephyr-7B-CC-finetuned-model
Lalith16
2024-01-24T08:17:37Z
0
0
null
[ "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:finetune:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2024-01-24T08:17:05Z
---
license: mit
base_model: HuggingFaceH4/zephyr-7b-beta
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: results
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.6670

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7604 | 0.35 | 100 | 0.9897 |
| 0.6403 | 0.69 | 200 | 0.8543 |
| 0.6267 | 1.04 | 300 | 0.7156 |
| 0.5962 | 1.39 | 400 | 0.7106 |
| 0.5715 | 1.74 | 500 | 0.6555 |
| 0.4264 | 2.08 | 600 | 0.6715 |
| 0.4729 | 2.43 | 700 | 0.6421 |
| 0.4342 | 2.78 | 800 | 0.6459 |
| 0.3264 | 3.12 | 900 | 0.6558 |
| 0.3497 | 3.47 | 1000 | 0.6695 |
| 0.3517 | 3.82 | 1100 | 0.6312 |
| 0.3116 | 4.17 | 1200 | 0.6810 |
| 0.3324 | 4.51 | 1300 | 0.7153 |
| 0.3497 | 4.86 | 1400 | 0.6670 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
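The card does not show how to load the result. Assuming the repository contains full merged weights rather than just an adapter (the card itself does not say which), a minimal loading sketch would be:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo holds full model weights, not a PEFT adapter.
repo_id = "Lalith16/Zephyr-7B-CC-finetuned-model"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```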
bartowski/Orion-14B-Chat-exl2
bartowski
2024-01-24T08:10:30Z
0
1
null
[ "text-generation", "region:us" ]
text-generation
2024-01-24T07:41:11Z
---
quantized_by: bartowski
pipeline_tag: text-generation
---

## Exllama v2 Quantizations of Orion-14B-Chat

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization.

# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)

Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.

Original model: https://huggingface.co/OrionStarAI/Orion-14B-Chat

No GQA - VRAM requirements will be higher

| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| ------ | ---- | ------------ | --------- | ---------- | ----------- |
| [6_5](https://huggingface.co/Bartowski/Orion-14B-Chat-exl2/tree/6_5) | 6.5 | 8.0 | 14.4 GB | 24.0 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Orion-14B-Chat-exl2/tree/5_0) | 5.0 | 6.0 | 12.1 GB | 21.7 GB | Slightly lower perplexity vs 6.5, can fit in 12 GB card with even lower context. |
| [4_25](https://huggingface.co/Bartowski/Orion-14B-Chat-exl2/tree/4_25) | 4.25 | 6.0 | 10.9 GB | 20.5 GB | GPTQ equivalent bits per weight. |
| [3_75](https://huggingface.co/Bartowski/Orion-14B-Chat-exl2/tree/3_75) | 3.75 | 6.0 | 10.1 GB | 19.7 GB | Lower quality but still generally usable. |
| [3_0](https://huggingface.co/Bartowski/Orion-14B-Chat-exl2/tree/3_0) | 3.0 | 6.0 | 9.1 GB | 18.7 GB | Very low quality, not recommended unless you have to. |

VRAM requirements are listed for both 4k and 16k context since, without GQA, the difference is massive (9.6 GB).

## Download instructions

With git:

```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Orion-14B-Chat-exl2 Orion-14B-Chat-exl2-6_5
```

With huggingface hub (credit to TheBloke for instructions):

```shell
pip3 install huggingface-hub
```

To download the `main` branch (only useful if you only care about measurement.json) to a folder called `Orion-14B-Chat-exl2`:

```shell
mkdir Orion-14B-Chat-exl2
huggingface-cli download bartowski/Orion-14B-Chat-exl2 --local-dir Orion-14B-Chat-exl2 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

Linux:

```shell
mkdir Orion-14B-Chat-exl2-6_5
huggingface-cli download bartowski/Orion-14B-Chat-exl2 --revision 6_5 --local-dir Orion-14B-Chat-exl2-6_5 --local-dir-use-symlinks False
```

Windows (which apparently doesn't like _ in folders sometimes?):

```shell
mkdir Orion-14B-Chat-exl2-6.5
huggingface-cli download bartowski/Orion-14B-Chat-exl2 --revision 6_5 --local-dir Orion-14B-Chat-exl2-6.5 --local-dir-use-symlinks False
```
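The same branch-specific download can also be done from Python; here is a small sketch using `huggingface_hub`'s `snapshot_download`, equivalent to the CLI commands above:

```python
from huggingface_hub import snapshot_download

# Fetch the 6.5 bits-per-weight branch into a local folder.
snapshot_download(
    repo_id="bartowski/Orion-14B-Chat-exl2",
    revision="6_5",
    local_dir="Orion-14B-Chat-exl2-6_5",
    local_dir_use_symlinks=False,
)
```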
Oztobuzz/my_testing_mlm_model
Oztobuzz
2024-01-24T08:04:28Z
96
0
transformers
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:vinai/phobert-base-v2", "base_model:finetune:vinai/phobert-base-v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-06T11:00:13Z
---
base_model: vinai/phobert-base-v2
tags:
- generated_from_trainer
model-index:
- name: my_testing_mlm_model
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# my_testing_mlm_model

This model is a fine-tuned version of [vinai/phobert-base-v2](https://huggingface.co/vinai/phobert-base-v2) on an unspecified dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
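No usage example is included; the following is a minimal, hypothetical fill-mask sketch. PhoBERT-style models expect word-segmented Vietnamese input and use `<mask>` as the mask token:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Oztobuzz/my_testing_mlm_model")

# PhoBERT-style models expect word-segmented Vietnamese
# (multi-syllable words joined with underscores).
print(fill_mask("Hà_Nội là thủ_đô của <mask> ."))
```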
Omar95farag/2024-01-03_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp
Omar95farag
2024-01-24T07:59:41Z
91
0
transformers
[ "transformers", "pytorch", "layoutlmv3", "text-classification", "generated_from_trainer", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-17T11:48:21Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: 2024-01-03_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 2024-01-03_one_stage_subgraphs_entropyreg_txt_vis_conc_6_ramp This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1266 - Accuracy: 0.705 - Exit 0 Accuracy: 0.195 - Exit 1 Accuracy: 0.7025 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 24 - total_train_batch_size: 192 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:| | No log | 0.96 | 4 | 2.7544 | 0.115 | 0.0575 | 0.0625 | | No log | 1.96 | 8 | 2.6911 | 0.135 | 0.125 | 0.0625 | | No log | 2.96 | 12 | 2.6410 | 0.1775 | 0.1225 | 0.18 | | No log | 3.96 | 16 | 2.5664 | 0.2025 | 0.125 | 0.1825 | | No log | 4.96 | 20 | 2.5036 | 0.2475 | 0.1225 | 0.2475 | | No log | 5.96 | 24 | 2.4172 | 0.28 | 0.12 | 0.2275 | | No log | 6.96 | 28 | 2.3247 | 0.3 | 0.1275 | 0.2225 | | No log | 7.96 | 32 | 2.2355 | 0.36 | 0.14 | 0.2525 | | No log | 8.96 | 36 | 2.1384 | 0.4025 | 0.1375 | 0.315 | | No log | 9.96 | 40 | 2.0150 | 0.465 | 0.14 | 0.3475 | | No log | 10.96 | 44 | 1.9193 | 0.4925 | 0.1425 | 0.37 | | No log | 11.96 | 48 | 1.7777 | 0.5375 | 0.145 | 0.4325 | | No log | 12.96 | 52 | 1.6960 | 0.56 | 0.15 | 0.5 | | No log | 13.96 | 56 | 1.5905 | 0.59 | 0.155 | 0.49 | | No log | 14.96 | 60 | 1.5197 | 0.625 | 0.155 | 0.5275 | | No log | 15.96 | 64 | 1.4335 | 0.6475 | 0.1525 | 0.5425 | | No log | 16.96 | 68 | 1.3831 | 0.6575 | 0.1575 | 0.5675 | | No log | 17.96 | 72 | 1.3216 | 0.6775 | 0.155 | 0.575 | | No log | 18.96 | 76 | 1.2973 | 0.6825 | 0.1575 | 0.5825 | | No log | 19.96 | 80 | 1.2342 | 0.6975 | 0.1575 | 0.6025 | | No log | 20.96 | 84 | 1.2190 | 0.6825 | 0.16 | 0.605 | | No log | 21.96 | 88 | 1.1758 | 0.7125 | 0.1625 | 0.62 | | No log | 22.96 | 92 | 1.1612 | 0.685 | 0.1675 | 0.625 | | No log | 23.96 | 96 | 1.1329 | 0.6925 | 0.1675 | 0.64 | | No log | 24.96 | 100 | 1.1001 | 0.7125 | 0.1675 | 0.635 | | No log | 25.96 | 104 | 1.0943 | 0.7025 | 0.175 | 0.645 | | No log | 26.96 | 108 | 1.0794 | 0.7125 | 0.18 | 0.6475 | | No log | 27.96 | 112 | 1.0919 | 0.6925 | 0.185 | 0.6475 | | No log | 28.96 | 116 | 1.0630 | 0.72 | 0.1875 | 0.6575 | | No log | 29.96 | 120 | 1.0831 | 0.7 | 0.1875 | 0.655 | | No log | 30.96 | 124 | 1.0581 | 0.695 | 0.1875 | 0.6625 | | No log | 31.96 | 128 | 1.0588 | 0.715 | 0.1875 | 0.66 | | No log | 32.96 | 132 | 1.0624 | 0.6975 | 0.185 | 0.675 | | No log | 33.96 | 136 | 1.0355 | 0.71 | 0.1875 | 0.675 | | No log | 34.96 | 140 | 1.0777 | 0.6925 | 0.1875 | 0.665 | | No log | 35.96 | 144 | 1.0514 | 0.71 | 0.19 | 0.675 | | No log | 36.96 | 148 | 1.0678 | 
0.7 | 0.1925 | 0.6825 | | No log | 37.96 | 152 | 1.0610 | 0.7025 | 0.1925 | 0.68 | | No log | 38.96 | 156 | 1.0726 | 0.7025 | 0.195 | 0.69 | | No log | 39.96 | 160 | 1.0818 | 0.7025 | 0.195 | 0.69 | | No log | 40.96 | 164 | 1.0893 | 0.6975 | 0.1925 | 0.685 | | No log | 41.96 | 168 | 1.0980 | 0.695 | 0.195 | 0.69 | | No log | 42.96 | 172 | 1.1009 | 0.7025 | 0.1925 | 0.6925 | | No log | 43.96 | 176 | 1.0896 | 0.705 | 0.1925 | 0.695 | | No log | 44.96 | 180 | 1.0697 | 0.7125 | 0.1925 | 0.695 | | No log | 45.96 | 184 | 1.1185 | 0.7025 | 0.1925 | 0.695 | | No log | 46.96 | 188 | 1.0956 | 0.705 | 0.1925 | 0.6925 | | No log | 47.96 | 192 | 1.1095 | 0.71 | 0.19 | 0.6975 | | No log | 48.96 | 196 | 1.1233 | 0.7075 | 0.1925 | 0.7025 | | No log | 49.96 | 200 | 1.1281 | 0.705 | 0.1925 | 0.7025 | | No log | 50.96 | 204 | 1.1428 | 0.6975 | 0.1925 | 0.7025 | | No log | 51.96 | 208 | 1.1292 | 0.7025 | 0.1925 | 0.71 | | No log | 52.96 | 212 | 1.1218 | 0.7025 | 0.19 | 0.7125 | | No log | 53.96 | 216 | 1.1143 | 0.7075 | 0.1925 | 0.7025 | | No log | 54.96 | 220 | 1.1192 | 0.7125 | 0.195 | 0.7025 | | No log | 55.96 | 224 | 1.1338 | 0.715 | 0.195 | 0.7025 | | No log | 56.96 | 228 | 1.1333 | 0.71 | 0.195 | 0.7075 | | No log | 57.96 | 232 | 1.1291 | 0.7025 | 0.195 | 0.7025 | | No log | 58.96 | 236 | 1.1268 | 0.705 | 0.195 | 0.705 | | No log | 59.96 | 240 | 1.1266 | 0.705 | 0.195 | 0.7025 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
TitanTec/ppo-LunaInvader-T2
TitanTec
2024-01-24T07:51:28Z
0
0
null
[ "tensorboard", "LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T06:48:04Z
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -6.88 +/- 50.80 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'gym_id': 'LunarLander-v2' 'total_timesteps': 1000000 'learning_rate': 0.0001 'num_envs': 8 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_gamma': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'TitanTec/ppo-LunaInvader-T2' 'batch_size': 1024 'minibatch_size': 256} ```
Deepakkori45/AspectExtraction_instruct
Deepakkori45
2024-01-24T07:50:30Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-24T07:50:22Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
taki0112/lora-trained-xl_craft-clay_split
taki0112
2024-01-24T07:45:24Z
2
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-19T11:29:30Z
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'a deer in sks style'
  output:
    url: "image_0.png"
- text: 'a deer in sks style'
  output:
    url: "image_1.png"
- text: 'a deer in sks style'
  output:
    url: "image_2.png"
- text: 'a deer in sks style'
  output:
    url: "image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a dog in sks style
license: openrail++
---

# SDXL LoRA DreamBooth - taki0112/lora-trained-xl_craft-clay_split

<Gallery />

## Model description

These are taki0112/lora-trained-xl_craft-clay_split LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use a dog in sks style to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/taki0112/lora-trained-xl_craft-clay_split/tree/main) them in the Files & versions tab.
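This card and its two sibling LoRA cards below (photo_split, anime_split) omit a loading snippet; here is a minimal sketch with `diffusers`. The fp16 dtype and step count are assumptions; for the siblings, swap in the other repo id and trigger phrase:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("taki0112/lora-trained-xl_craft-clay_split")

# Use the trigger phrase documented in the card.
image = pipe("a dog in sks style", num_inference_steps=30).images[0]
image.save("dog_in_sks_style.png")
```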
taki0112/lora-trained-xl_photo_split
taki0112
2024-01-24T07:44:50Z
1
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-19T11:28:53Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a bird in sks style' output: url: "image_0.png" - text: 'a bird in sks style' output: url: "image_1.png" - text: 'a bird in sks style' output: url: "image_2.png" - text: 'a bird in sks style' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a cloud in sks style license: openrail++ --- # SDXL LoRA DreamBooth - taki0112/lora-trained-xl_photo_split <Gallery /> ## Model description These are taki0112/lora-trained-xl_photo_split LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a cloud in sks style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](taki0112/lora-trained-xl_photo_split/tree/main) them in the Files & versions tab.
taki0112/lora-trained-xl_anime_split
taki0112
2024-01-24T07:43:35Z
4
1
diffusers
[ "diffusers", "tensorboard", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-19T11:29:41Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora widget: - text: 'a lion in sks style' output: url: "image_0.png" - text: 'a lion in sks style' output: url: "image_1.png" - text: 'a lion in sks style' output: url: "image_2.png" - text: 'a lion in sks style' output: url: "image_3.png" base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a cat in sks style license: openrail++ --- # SDXL LoRA DreamBooth - taki0112/lora-trained-xl_anime_split <Gallery /> ## Model description These are taki0112/lora-trained-xl_anime_split LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a cat in sks style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](taki0112/lora-trained-xl_anime_split/tree/main) them in the Files & versions tab.
tourist800/ORKG-finetuned-llama-7b-chat
tourist800
2024-01-24T07:33:16Z
0
0
peft
[ "peft", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "base_model:adapter:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-01-24T07:32:27Z
--- library_name: peft base_model: NousResearch/Llama-2-7b-chat-hf --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.6.0
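The quantization block above records how the base model was loaded during training. A minimal reload sketch, assuming the standard `transformers`/`peft`/`bitsandbytes` APIs and that this repository hosts the adapter weights:

```python
# Minimal sketch: recreate the card's 4-bit NF4 config and attach the adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_4bit: True
    bnb_4bit_quant_type="nf4",              # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=True,         # bnb_4bit_use_double_quant: True
    bnb_4bit_compute_dtype=torch.bfloat16,  # bnb_4bit_compute_dtype: bfloat16
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "tourist800/ORKG-finetuned-llama-7b-chat")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
```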
stablediffusionapi/samaritan-3d-v4
stablediffusionapi
2024-01-24T07:27:27Z
29
1
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
2024-01-24T07:25:33Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # samaritan-3d-v4 API Inference ![generated from modelslab.com](https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b252b178-0b3e-48c1-9951-2a6689a6565c/width=450/00044-213968932.jpeg) ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "samaritan-3d-v4". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/samaritan-3d-v4) Model link: [View model](https://modelslab.com/models/samaritan-3d-v4) View all models: [View Models](https://modelslab.com/models) ```python import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "samaritan-3d-v4", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) ``` > Use this coupon code to get 25% off **DMGG0RBN**
ntc-ai/SDXL-LoRA-slider.serenity-film-still
ntc-ai
2024-01-24T07:26:25Z
115
0
diffusers
[ "diffusers", "text-to-image", "stable-diffusion-xl", "lora", "template:sd-lora", "template:sdxl-lora", "sdxl-sliders", "ntcai.xyz-sliders", "concept", "en", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:mit", "region:us" ]
text-to-image
2024-01-24T07:26:19Z
--- language: - en thumbnail: "images/evaluate/serenity film still.../serenity film still_17_3.0.png" widget: - text: serenity film still output: url: images/serenity film still_17_3.0.png - text: serenity film still output: url: images/serenity film still_19_3.0.png - text: serenity film still output: url: images/serenity film still_20_3.0.png - text: serenity film still output: url: images/serenity film still_21_3.0.png - text: serenity film still output: url: images/serenity film still_22_3.0.png tags: - text-to-image - stable-diffusion-xl - lora - template:sd-lora - template:sdxl-lora - sdxl-sliders - ntcai.xyz-sliders - concept - diffusers license: "mit" inference: false instance_prompt: "serenity film still" base_model: "stabilityai/stable-diffusion-xl-base-1.0" --- # ntcai.xyz slider - serenity film still (SDXL LoRA) | Strength: -3 | Strength: 0 | Strength: 3 | | --- | --- | --- | | <img src="images/serenity film still_17_-3.0.png" width=256 height=256 /> | <img src="images/serenity film still_17_0.0.png" width=256 height=256 /> | <img src="images/serenity film still_17_3.0.png" width=256 height=256 /> | | <img src="images/serenity film still_19_-3.0.png" width=256 height=256 /> | <img src="images/serenity film still_19_0.0.png" width=256 height=256 /> | <img src="images/serenity film still_19_3.0.png" width=256 height=256 /> | | <img src="images/serenity film still_20_-3.0.png" width=256 height=256 /> | <img src="images/serenity film still_20_0.0.png" width=256 height=256 /> | <img src="images/serenity film still_20_3.0.png" width=256 height=256 /> | ## Download Weights for this model are available in Safetensors format. ## Trigger words You can apply this LoRA with trigger words for additional effect: ``` serenity film still ``` ## Use in diffusers ```python from diffusers import StableDiffusionXLPipeline from diffusers import EulerAncestralDiscreteScheduler import torch pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors") pipe.to("cuda") pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config) # Load the LoRA pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.serenity-film-still', weight_name='serenity film still.safetensors', adapter_name="serenity film still") # Activate the LoRA pipe.set_adapters(["serenity film still"], adapter_weights=[2.0]) prompt = "medieval rich kingpin sitting in a tavern, serenity film still" negative_prompt = "nsfw" width = 512 height = 512 num_inference_steps = 10 guidance_scale = 2 image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0] image.save('result.png') ``` ## Support the Patreon If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI). By joining our Patreon, you'll gain access to an ever-growing library of over 1140+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities. Your support on Patreon will allow us to continue developing and refining new models. ## Other resources - [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs - [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
cris177/Orca-Hermes-7B-slerp
cris177
2024-01-24T07:22:56Z
1,356
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "Open-Orca/Mistral-7B-OpenOrca", "teknium/OpenHermes-2.5-Mistral-7B", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T22:34:35Z
--- license: apache-2.0 tags: - merge - mergekit - Open-Orca/Mistral-7B-OpenOrca - teknium/OpenHermes-2.5-Mistral-7B --- # Orca-Hermes-7B-slerp Orca-Hermes-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [Open-Orca/Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Open-Orca/Mistral-7B-OpenOrca layer_range: [0, 32] - model: teknium/OpenHermes-2.5-Mistral-7B layer_range: [0, 32] merge_method: slerp base_model: Open-Orca/Mistral-7B-OpenOrca tokenizer_source: base parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
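The card shows only the merge recipe. A minimal inference sketch, assuming the merged checkpoint loads with the standard `transformers` causal-LM API; the prompt and generation settings are illustrative:

```python
# Minimal sketch: load the merged model and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cris177/Orca-Hermes-7B-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer(
    "What is a SLERP merge of two language models?", return_tensors="pt"
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```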
sumangpt/zephyr-support-chatbot
sumangpt
2024-01-24T07:17:10Z
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:adapter:HuggingFaceH4/zephyr-7b-beta", "license:mit", "region:us" ]
null
2024-01-24T06:45:39Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: HuggingFaceH4/zephyr-7b-beta model-index: - name: zephyr-support-chatbot results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # zephyr-support-chatbot This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 250 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu118 - Datasets 2.16.1 - Tokenizers 0.15.0
ansilmbabl/swin-tiny-patch4-window7-224-finetuned-eurosat
ansilmbabl
2024-01-24T07:15:36Z
175
0
transformers
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "base_model:finetune:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
2024-01-24T06:53:51Z
--- license: apache-2.0 base_model: microsoft/swin-tiny-patch4-window7-224 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9755555555555555 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0690 - Accuracy: 0.9756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2345 | 1.0 | 190 | 0.1822 | 0.9348 | | 0.1618 | 2.0 | 380 | 0.0947 | 0.9670 | | 0.1439 | 3.0 | 570 | 0.0690 | 0.9756 | ### Framework versions - Transformers 4.37.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
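The card reports results but gives no inference code. A minimal sketch, assuming the checkpoint works with the standard `transformers` image-classification pipeline; the input path is illustrative:

```python
# Minimal sketch: classify an image with the fine-tuned Swin checkpoint.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="ansilmbabl/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
# Any local path or URL to an RGB image works here (illustrative input).
print(classifier("satellite_tile.png"))
```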
merge-tester-31256/Mage-13b
merge-tester-31256
2024-01-24T07:05:41Z
8
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "Sao10K/Ana-v1-m7", "mlabonne/NeuralBeagle14-7B", "conversational", "base_model:Sao10K/Ana-v1-m7", "base_model:merge:Sao10K/Ana-v1-m7", "base_model:mlabonne/NeuralBeagle14-7B", "base_model:merge:mlabonne/NeuralBeagle14-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T07:01:48Z
--- tags: - merge - mergekit - lazymergekit - Sao10K/Ana-v1-m7 - mlabonne/NeuralBeagle14-7B base_model: - Sao10K/Ana-v1-m7 - mlabonne/NeuralBeagle14-7B --- # Mage-13b Mage-13b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Sao10K/Ana-v1-m7](https://huggingface.co/Sao10K/Ana-v1-m7) * [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) ## 🧩 Configuration ```yaml slices: - sources: - model: Sao10K/Ana-v1-m7 layer_range: [0, 32] - model: mlabonne/NeuralBeagle14-7B layer_range: [0, 32] merge_method: slerp base_model: mlabonne/NeuralBeagle14-7B parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "merge-tester-31256/Mage-13b" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
Gayathri142214002/Question_Generation_ComQ_6_2
Gayathri142214002
2024-01-24T07:01:02Z
5
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:Gayathri142214002/Question_Generation_ComQ_5", "base_model:finetune:Gayathri142214002/Question_Generation_ComQ_5", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-24T06:21:34Z
--- license: apache-2.0 base_model: Gayathri142214002/Question_Generation_ComQ_5 tags: - generated_from_trainer model-index: - name: Question_Generation_ComQ_6_2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Question_Generation_ComQ_6_2 This model is a fine-tuned version of [Gayathri142214002/Question_Generation_ComQ_5](https://huggingface.co/Gayathri142214002/Question_Generation_ComQ_5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3553 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.2627 | 0.25 | 100 | 0.2558 | | 0.3047 | 0.51 | 200 | 0.2629 | | 0.2856 | 0.76 | 300 | 0.2784 | | 0.2711 | 1.02 | 400 | 0.2951 | | 0.2411 | 1.27 | 500 | 0.3052 | | 0.2528 | 1.53 | 600 | 0.2968 | | 0.2443 | 1.78 | 700 | 0.2970 | | 0.2451 | 2.04 | 800 | 0.3079 | | 0.2095 | 2.29 | 900 | 0.3276 | | 0.2152 | 2.54 | 1000 | 0.3198 | | 0.2288 | 2.8 | 1100 | 0.3175 | | 0.218 | 3.05 | 1200 | 0.3187 | | 0.1876 | 3.31 | 1300 | 0.3400 | | 0.2007 | 3.56 | 1400 | 0.3390 | | 0.2113 | 3.82 | 1500 | 0.3313 | | 0.2037 | 4.07 | 1600 | 0.3351 | | 0.1699 | 4.33 | 1700 | 0.3525 | | 0.1916 | 4.58 | 1800 | 0.3478 | | 0.1967 | 4.83 | 1900 | 0.3392 | | 0.1847 | 5.09 | 2000 | 0.3451 | | 0.165 | 5.34 | 2100 | 0.3496 | | 0.1737 | 5.6 | 2200 | 0.3508 | | 0.1699 | 5.85 | 2300 | 0.3494 | | 0.1652 | 6.11 | 2400 | 0.3527 | | 0.1605 | 6.36 | 2500 | 0.3531 | | 0.1623 | 6.62 | 2600 | 0.3548 | | 0.1507 | 6.87 | 2700 | 0.3553 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
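The card documents training only. A minimal inference sketch, assuming the checkpoint works with the standard `text2text-generation` pipeline; the expected input format is an assumption, since the card does not specify one:

```python
# Minimal sketch: generate a question from a context passage (input format assumed).
from transformers import pipeline

qg = pipeline(
    "text2text-generation",
    model="Gayathri142214002/Question_Generation_ComQ_6_2",
)
print(qg("The Eiffel Tower was completed in 1889 in Paris.")[0]["generated_text"])
```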
Cuphadi/ppo-LunarLander-v2
Cuphadi
2024-01-24T06:55:55Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T06:55:36Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 253.52 +/- 22.31 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
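The card leaves its usage stub as a TODO. A minimal sketch completing it, assuming the usual `huggingface_sb3` conventions; the checkpoint filename inside the repo is an assumption, so check the Files & versions tab for the real name:

```python
# Minimal sketch: download the PPO checkpoint from the Hub and roll it out in the env.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed from the usual sb3 naming convention
checkpoint = load_from_hub("Cuphadi/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```

The same loading pattern applies to the other PPO LunarLander-v2 cards below, swapping in the respective repository id.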
srihariEmids/phi-2-finetuned-emids-text
srihariEmids
2024-01-24T06:36:34Z
7
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "dataset:srihariEmids/emids_data_small", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T06:18:19Z
--- library_name: transformers datasets: - srihariEmids/emids_data_small metrics: - accuracy - bertscore pipeline_tag: text-generation --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. 
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
anodare/ppo-LunarLander-v2
anodare
2024-01-24T06:33:42Z
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T02:20:00Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 285.61 +/- 22.84 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
himanshue2e/corgy_dog_LoRA
himanshue2e
2024-01-24T06:29:56Z
0
0
diffusers
[ "diffusers", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
2024-01-19T08:38:32Z
--- tags: - stable-diffusion-xl - stable-diffusion-xl-diffusers - text-to-image - diffusers - lora - template:sd-lora base_model: stabilityai/stable-diffusion-xl-base-1.0 instance_prompt: a photo of TOK dog license: openrail++ --- # SDXL LoRA DreamBooth - himanshugrad/corgy_dog_LoRA <Gallery /> ## Model description These are himanshugrad/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `a photo of TOK dog` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](himanshugrad/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
bulkbeings/Mistral-v1
bulkbeings
2024-01-24T06:21:42Z
6
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T06:04:47Z
--- license: mit pipeline_tag: conversational ---
tsobolev/ppo-LunarLander-v2
tsobolev
2024-01-24T06:16:51Z
1
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T04:43:15Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 204.53 +/- 32.61 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
linhcuem/checker_TB_yolov8_ver1
linhcuem
2024-01-24T06:12:25Z
1
0
ultralytics
[ "ultralytics", "tensorboard", "v8", "ultralyticsplus", "yolov8", "yolo", "vision", "object-detection", "pytorch", "model-index", "region:us" ]
object-detection
2024-01-24T06:12:16Z
--- tags: - ultralyticsplus - yolov8 - ultralytics - yolo - vision - object-detection - pytorch library_name: ultralytics library_version: 8.0.43 inference: false model-index: - name: linhcuem/checker_TB_yolov8_ver1 results: - task: type: object-detection metrics: - type: precision # since [email protected] is not available on hf.co/metrics value: 0.9628 # min: 0.0 - max: 1.0 name: [email protected](box) --- <div align="center"> <img width="640" alt="linhcuem/checker_TB_yolov8_ver1" src="https://huggingface.co/linhcuem/checker_TB_yolov8_ver1/resolve/main/thumbnail.jpg"> </div> ### Supported Labels ``` ['bom_gen', 'bom_jn', 'bom_knp', 'bom_sachet', 'bom_vtgk', 'bom_ytv', 'hop_dln', 'hop_jn', 'hop_vtg', 'hop_ytv', 'lo_kids', 'lo_ytv', 'loc_dln', 'loc_jn', 'loc_kids', 'loc_ytv', 'pocky', 'tui_gen', 'tui_jn', 'tui_sachet', 'tui_vtgk'] ``` ### How to use - Install [ultralyticsplus](https://github.com/fcakyon/ultralyticsplus): ```bash pip install ultralyticsplus==0.0.28 ultralytics==8.0.43 ``` - Load model and perform prediction: ```python from ultralyticsplus import YOLO, render_result # load model model = YOLO('linhcuem/checker_TB_yolov8_ver1') # set model parameters model.overrides['conf'] = 0.25 # NMS confidence threshold model.overrides['iou'] = 0.45 # NMS IoU threshold model.overrides['agnostic_nms'] = False # NMS class-agnostic model.overrides['max_det'] = 1000 # maximum number of detections per image # set image image = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg' # perform inference results = model.predict(image) # observe results print(results[0].boxes) render = render_result(model=model, image=image, result=results[0]) render.show() ```
pythainlp/thaitts-onnx
pythainlp
2024-01-24T06:11:35Z
0
0
null
[ "onnx", "th", "license:apache-2.0", "region:us" ]
null
2024-01-24T06:08:57Z
--- license: apache-2.0 language: - th --- # thaitts-onnx Thai text-to-speech with ONNX Runtime. See the model repository: [https://github.com/PyThaiNLP/thaitts-onnx](https://github.com/PyThaiNLP/thaitts-onnx)
brittlewis12/Snorkel-Mistral-PairRM-DPO-GGUF
brittlewis12
2024-01-24T06:02:16Z
34
5
null
[ "gguf", "text-generation", "en", "dataset:snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:snorkelai/Snorkel-Mistral-PairRM-DPO", "base_model:quantized:snorkelai/Snorkel-Mistral-PairRM-DPO", "license:apache-2.0", "region:us", "conversational" ]
text-generation
2024-01-23T21:17:20Z
--- base_model: snorkelai/Snorkel-Mistral-PairRM-DPO datasets: - snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset - HuggingFaceH4/ultrafeedback_binarized license: apache-2.0 language: - en model_creator: snorkelai model_name: Snorkel-Mistral-PairRM-DPO model_type: mistral inference: false pipeline_tag: text-generation prompt_template: | <|im_start|>system {{system_message}}<|im_end|> <|im_start|>user {{prompt}}<|im_end|> <|im_start|>assistant quantized_by: brittlewis12 --- # Snorkel-Mistral-PairRM-DPO GGUF Original model: [Snorkel-Mistral-PairRM-DPO](https://huggingface.co/snorkelai/Snorkel-Mistral-PairRM-DPO) Model creator: [Snorkel AI](https://huggingface.co/snorkelai) This repo contains GGUF format model files for Snorkel AI’s Snorkel-Mistral-PairRM-DPO. > With this demonstration, we focus on the general approach to alignment. Thus, we use a general-purpose reward model - the performant PairRM model. We use the Mistral-7B-Instruct-v0.2 model as our base LLM. ### What is GGUF? GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Converted using llama.cpp b1960 ([26d6076](https://github.com/ggerganov/llama.cpp/commits/26d607608d794efa56df3bdb6043a2f94c1d632c)) ### Prompt template: ChatML ``` <|im_start|>system {{system_message}}<|im_end|> <|im_start|>user {{prompt}}<|im_end|> <|im_start|>assistant ``` --- ## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac! ![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg) [cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device: - create & save **Characters** with custom system prompts & temperature settings - download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)! - make it your own with custom **Theme colors** - powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming! - **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)! - follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date --- ## Original Model Evaluations: > On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/): > - The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**. > > After applying the above methodology: > - This model scored **30.22** - ranked 3rd and the highest for an open-source base model at the time of publication.
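The card gives the ChatML template but no runner. A minimal sketch using `llama-cpp-python`, one common way to run GGUF files; the exact `.gguf` filename in this repo is an assumption, so check the Files & versions tab:

```python
# Minimal sketch: run a GGUF quant of this model with the ChatML template above.
from llama_cpp import Llama

# Filename is an assumption; pick an actual quant from the repo's Files tab.
llm = Llama(model_path="snorkel-mistral-pairrm-dpo.Q4_K_M.gguf", n_ctx=4096)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nSummarize what DPO training does.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```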
dhruvmakwana/dummy
dhruvmakwana
2024-01-24T05:57:37Z
90
0
transformers
[ "transformers", "safetensors", "camembert", "fill-mask", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-24T05:41:22Z
--- license: mit --- This is a dummy model created while learning the Hugging Face Hub.
HexawareTech/phi2-base-model
HexawareTech
2024-01-24T05:48:53Z
36
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T05:45:46Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
HexawareTech/phi-base-model
HexawareTech
2024-01-24T05:44:49Z
5
0
transformers
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-23T12:05:34Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MoulikBansal/phi-1_5-new-fine-tuned
MoulikBansal
2024-01-24T05:32:59Z
4
0
peft
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "base_model:adapter:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-01-23T15:18:28Z
--- license: mit library_name: peft tags: - generated_from_trainer base_model: microsoft/phi-1_5 model-index: - name: phi-1_5-new-fine-tuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1_5-new-fine-tuned This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 1000 ### Training results ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0 - Pytorch 2.1.0+cu121 - Datasets 2.16.1 - Tokenizers 0.15.0
sowji1219/MyFirsModel
sowji1219
2024-01-24T05:27:29Z
0
0
null
[ "arxiv:1910.09700", "region:us" ]
null
2024-01-24T05:25:47Z
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
shuyuej/metamath_lora_llama2_7b_3_epoch
shuyuej
2024-01-24T05:11:44Z
0
1
null
[ "license:apache-2.0", "model-index", "region:us" ]
null
2023-12-24T01:16:46Z
--- model-index: - name: MetaMath-LoRA-LLaMA-7B results: - task: type: text-generation dataset: name: meta-math/MetaMathQA type: meta-math/MetaMathQA metrics: - name: Accuracy (zero-shot) type: Accuracy (zero-shot) value: 0.641 verified: true source: name: Arithmetic Reasoning on GSM8K url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k license: apache-2.0 --- # Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA Fine-tuned for three epochs. ## Result **Reloading the saved adapter**: 4 invalid outputs out of 1319 test examples, **Accuracy: 0.641** ## Comparison The officially reported **accuracy is 0.665**, obtained by fine-tuning the whole LLaMA 2 7B model for 3 epochs. **Note**: This LoRA adapter is released for future research purposes. ## Deployment ```python # Load the pre-trained LoRA adapter model.load_adapter("shuyuej/metamath_lora_llama2_7b_3_epoch") model.enable_adapters() ```
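The deployment snippet above assumes a `model` object already exists. A fuller sketch is given below, under the assumption that the base checkpoint is `meta-llama/Llama-2-7b-hf` (the card does not name it) and that the transformers PEFT integration is installed:

```python
# Hypothetical end-to-end loading sketch; the base checkpoint and generation
# settings are assumptions, not part of the original card.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed base model (gated; requires license acceptance)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# Attach and enable the LoRA adapter from this repository
model.load_adapter("shuyuej/metamath_lora_llama2_7b_3_epoch")
model.enable_adapters()

inputs = tokenizer("Question: What is 17 * 23?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```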
shuyuej/metamath_lora_llama2_7b_5_epoch
shuyuej
2024-01-24T05:11:15Z
0
1
null
[ "license:apache-2.0", "model-index", "region:us" ]
null
2023-12-26T03:43:47Z
--- model-index: - name: MetaMath-LoRA-LLaMA-7B results: - task: type: text-generation dataset: name: meta-math/MetaMathQA type: meta-math/MetaMathQA metrics: - name: Accuracy (zero-shot) type: Accuracy (zero-shot) value: null verified: true source: name: Arithmetic Reasoning on GSM8K url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k license: apache-2.0 --- # Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA Fine-tuned for 4.66 epochs. ## Result **Reloading the saved adapter**: evaluated on 1319 test examples; the invalid-output count and accuracy are not yet reported. ## Comparison The officially reported **accuracy is 0.665**, obtained by fine-tuning the whole LLaMA 2 7B model for 3 epochs. **Note**: This LoRA adapter is released for future research purposes. ## Deployment ```python # Load the pre-trained LoRA adapter model.load_adapter("shuyuej/metamath_lora_llama2_7b_5_epoch") model.enable_adapters() ```
shuyuej/metamath_lora_qkv_llama2_7b
shuyuej
2024-01-24T05:08:07Z
0
1
null
[ "license:apache-2.0", "model-index", "region:us" ]
null
2023-12-31T05:05:12Z
--- model-index: - name: MetaMath-LoRA-LLaMA-7B results: - task: type: text-generation dataset: name: meta-math/MetaMathQA type: meta-math/MetaMathQA metrics: - name: Accuracy (zero-shot) type: Accuracy (zero-shot) value: 0.58 verified: true source: name: Arithmetic Reasoning on GSM8K url: https://paperswithcode.com/sota/arithmetic-reasoning-on-gsm8k license: apache-2.0 --- # Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA Fine-tuned for one epoch. ## Result **After training**: 7 invalid outputs out of 1319 test examples, **Accuracy: 0.580** ## Comparison The officially reported **accuracy is 0.665**, obtained by fine-tuning the whole LLaMA 2 7B model for 3 epochs. **Note**: This LoRA adapter is released for future research purposes. # 🚀 Adapter Usage ```python # Load the pre-trained LoRA adapter model.load_adapter("shuyuej/metamath_lora_qkv_llama2_7b") model.enable_adapters() ```
h2m/mhm-7b-v1.3-DPO-1
h2m
2024-01-24T05:03:51Z
1,338
1
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T05:09:38Z
--- license: apache-2.0 language: - en --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/ORVjYrpzyfKfP4ByOQnpQ.jpeg) A DPO fine-tune of [mhm-7b-v1.3](https://huggingface.co/h2m/mhm-7b-v1.3) on [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). Based on Mistral. Created using [dare_ties](https://github.com/cg123/mergekit) and models from the OpenLLM leaderboard: over 3 merges involving 7 different models, this was the result. Just an experiment.
h2m/mhm-7b-v1.3
h2m
2024-01-24T05:03:44Z
1,376
2
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "moe", "merge", "conversational", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-14T17:48:34Z
--- tags: - moe - merge license: apache-2.0 --- ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/ey84O7VrsOnsE7Ra8prgH.jpeg) # mhm-7-3 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). A merged model based on Mistral, created using dare_ties and models from the top of the OpenLLM leaderboard: 7 models mixed into 1 across 3 rounds of merging. Just an experiment.
dominic5/Project5_V3_Mistral7b_V2.1
dominic5
2024-01-24T04:56:53Z
1
0
peft
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-01-24T03:45:45Z
--- license: apache-2.0 library_name: peft tags: - trl - sft - generated_from_trainer datasets: - generator base_model: mistralai/Mistral-7B-v0.1 model-index: - name: Project5_V3_Mistral7b_V2.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Project5_V3_Mistral7b_V2.1 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 2.3036 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 128 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.8 | 1 | 2.3482 | | No log | 1.6 | 2 | 2.3439 | | No log | 2.4 | 3 | 2.3386 | | 2.3422 | 4.0 | 5 | 2.3264 | | 2.3422 | 4.8 | 6 | 2.3212 | | 2.3422 | 5.6 | 7 | 2.3166 | | 2.3422 | 6.4 | 8 | 2.3126 | | 2.3191 | 8.0 | 10 | 2.3072 | | 2.3191 | 8.8 | 11 | 2.3055 | | 2.3191 | 9.6 | 12 | 2.3045 | | 2.3191 | 10.4 | 13 | 2.3039 | | 2.3082 | 12.0 | 15 | 2.3036 | ### Framework versions - PEFT 0.7.1 - Transformers 4.37.0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.1
RiverTest/RiverMTG24
RiverTest
2024-01-24T04:53:44Z
5
0
transformers
[ "transformers", "safetensors", "mistral", "feature-extraction", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us" ]
feature-extraction
2024-01-24T04:42:12Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-6.0bpw-h6-exl2
LoneStriker
2024-01-24T04:46:01Z
6
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T04:42:33Z
--- language: - zh - en - fr - de - ja - ko - it - ru - fi pipeline_tag: text-generation inference: false library_name: transformers license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL) ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
svidhani/mistral-7B-finetuned-alpaca
svidhani
2024-01-24T04:45:03Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "base_model:adapter:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-01-24T04:32:18Z
--- library_name: peft base_model: mistralai/Mistral-7B-v0.1 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-5.0bpw-h6-exl2
LoneStriker
2024-01-24T04:42:30Z
8
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T04:39:30Z
--- language: - zh - en - fr - de - ja - ko - it - ru - fi pipeline_tag: text-generation inference: false library_name: transformers license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL) ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
LoneStriker/openbuddy-deepseek-10b-v17.1-4k-4.0bpw-h6-exl2
LoneStriker
2024-01-24T04:39:27Z
7
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "zh", "en", "fr", "de", "ja", "ko", "it", "ru", "fi", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
2024-01-24T04:36:54Z
--- language: - zh - en - fr - de - ja - ko - it - ru - fi pipeline_tag: text-generation inference: false library_name: transformers license: other license_name: deepseek license_link: https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL --- # OpenBuddy - Open Multilingual Chatbot GitHub and Usage Guide: [https://github.com/OpenBuddy/OpenBuddy](https://github.com/OpenBuddy/OpenBuddy) Website and Demo: [https://openbuddy.ai](https://openbuddy.ai) Evaluation result of this model: [Evaluation.txt](Evaluation.txt) ![Demo](https://raw.githubusercontent.com/OpenBuddy/OpenBuddy/main/media/demo.png) # Copyright Notice Base model: https://huggingface.co/deepseek-ai/deepseek-llm-7b-base License: [deepseek](https://github.com/deepseek-ai/DeepSeek-LLM/blob/548a39bdd03986297ea4e233a8b7676edd6bec3e/LICENSE-MODEL) ## Disclaimer All OpenBuddy models have inherent limitations and may potentially produce outputs that are erroneous, harmful, offensive, or otherwise undesirable. Users should not use these models in critical or high-stakes situations that may lead to personal injury, property damage, or significant losses. Examples of such scenarios include, but are not limited to, the medical field, controlling software and hardware systems that may cause harm, and making important financial or legal decisions. OpenBuddy is provided "as-is" without any warranty of any kind, either express or implied, including, but not limited to, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors, contributors, or copyright holders be liable for any claim, damages, or other liabilities, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software. By using OpenBuddy, you agree to these terms and conditions, and acknowledge that you understand the potential risks associated with its use. You also agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities arising from your use of OpenBuddy. ## 免责声明 所有OpenBuddy模型均存在固有的局限性,可能产生错误的、有害的、冒犯性的或其他不良的输出。用户在关键或高风险场景中应谨慎行事,不要使用这些模型,以免导致人身伤害、财产损失或重大损失。此类场景的例子包括但不限于医疗领域、可能导致伤害的软硬件系统的控制以及进行重要的财务或法律决策。 OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保证,包括但不限于适销性、特定目的的适用性和非侵权的暗示保证。在任何情况下,作者、贡献者或版权所有者均不对因软件或使用或其他软件交易而产生的任何索赔、损害赔偿或其他责任(无论是合同、侵权还是其他原因)承担责任。 使用OpenBuddy即表示您同意这些条款和条件,并承认您了解其使用可能带来的潜在风险。您还同意赔偿并使作者、贡献者和版权所有者免受因您使用OpenBuddy而产生的任何索赔、损害赔偿或责任的影响。
Gou1839/Live-Door-3Line-Summary
Gou1839
2024-01-24T04:38:46Z
90
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2024-01-24T04:28:00Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tvjoseph/GenerAd-AI
tvjoseph
2024-01-24T04:38:19Z
0
0
transformers
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-01-24T04:38:17Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
genne/kiwi_solar_merge_ties2_dpo
genne
2024-01-24T04:32:35Z
103
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "ko", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-15T23:23:32Z
--- license: apache-2.0 language: - ko ---
Megalino111/PixelCopter
Megalino111
2024-01-24T04:28:34Z
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
2024-01-24T04:28:02Z
--- tags: - Pixelcopter-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-PixelCopter results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pixelcopter-PLE-v0 type: Pixelcopter-PLE-v0 metrics: - type: mean_reward value: 52.40 +/- 37.72 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dsrestrepo/BERT_Lab_Values_10B_no_lab_id_no_repetition
dsrestrepo
2024-01-24T04:15:00Z
91
0
transformers
[ "transformers", "safetensors", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2024-01-24T04:09:31Z
# Model Details #### Model Name: NumericBERT #### Model Type: Transformer #### Architecture: BERT #### Training Method: Masked Language Modeling (MLM) #### Training Data: MIMIC IV Lab values data #### Training Hyperparameters: - **Optimizer:** AdamW - **Learning Rate:** 5e-5 - **Masking Rate:** 20% - **Tokenization:** Custom numeric-to-text mapping using the TextEncoder class ### Text Encoding Process **Overview:** Non-negative integers are converted into uppercase letter-based representations, allowing numerical values to be expressed as sequences of letters. **Normalization and Binning:** - **Method:** Log normalization and splitting into 10 bins. - **Representation:** Each bin is represented by a letter (A-J). ### Token Construction: - **Format:** `<<lab_value_bin>>` - **Example:** For a lab value with a normalized value in bin 'C', the token might be `C`. - **Columns Used:** 'Bic', 'Crt', 'Pot', 'Sod', 'Ure', 'Hgb', 'Plt', 'Wbc'. ### Training Data Preprocessing - **Column Selection:** Numerical values from selected lab values. - **Text Encoding:** Numeric values are encoded into text using the process described above. - **Masking:** 20% of the data is randomly masked during training. ### Model Output - **Description:** Outputs predictions for masked values during training. - **Format:** Contains the encoded text representing the predicted lab values. ### Limitations and Considerations - **Numeric Data Representation:** The custom text representation may have limitations in capturing the intricacies of the original numeric data. - **Training Data Source:** Performance may be influenced by the characteristics and biases inherent in the MIMIC IV dataset. - **Generalizability:** The model's effectiveness outside the context of the training dataset is not guaranteed. ### Contact Information - **Email:** [email protected] - **Name:** David Restrepo - **Affiliation:** MIT Critical Data - MIT
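Since the TextEncoder class itself is not reproduced in this card, the following is only a minimal sketch of the encoding it describes (log normalization, 10 equal-width bins, letters A-J); the use of `log1p` and per-array bin edges are assumptions:

```python
# Illustrative sketch of the numeric-to-text mapping; not the actual TextEncoder.
import numpy as np

def encode_lab_values(values: np.ndarray) -> list[str]:
    """Map positive lab values to single letters A-J via log-scale binning."""
    logged = np.log1p(values)                            # log normalization (assumed log1p)
    edges = np.linspace(logged.min(), logged.max(), 11)  # 10 equal-width bins on the log scale
    bins = np.clip(np.digitize(logged, edges[1:-1]), 0, 9)
    return [chr(ord("A") + int(b)) for b in bins]

# Example: three hypothetical lab values mapped to bin letters
print(encode_lab_values(np.array([3.5, 140.0, 7.2])))  # -> ['A', 'J', 'B']
```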
LN1996/peft-qlora-run1
LN1996
2024-01-24T04:09:30Z
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us" ]
null
2024-01-24T04:08:58Z
--- library_name: peft base_model: microsoft/phi-2 --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
Ont/Marcoroni-13B
Ont
2024-01-24T04:09:02Z
24
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-08T17:08:24Z
--- license: cc-by-nc-4.0 datasets: - Open-Orca/OpenOrca language: - en pipeline_tag: text-generation --- # Marcoroni-13B - Safetensors A conversion of the original model [AIDC-ai-business/Marcoroni-13B] to the safetensors format. # Marcoroni-13B # Model Details * **Trained by**: AIDC AI-Business. * **Model type:** **Marcoroni-13B** is an auto-regressive language model based on the Llama 2 transformer architecture. * **Language(s)**: English * **License for Marcoroni-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) # Prompting ## Prompt Template (Alpaca style) ``` ### Instruction: <prompt> (without the <>) ### Response: ``` # Evaluation Results ([Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)) | Metric | Value | |-----------------------|-------| | Avg. | 65.76 | | ARC (25-shot) | 62.46 | | HellaSwag (10-shot) | 83.27 | | MMLU (5-shot) | 59.63 | | TruthfulQA (0-shot) | 57.7 |
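A short usage sketch applying the Alpaca-style template above; the loading arguments and example instruction are illustrative, not part of the original card:

```python
# Minimal prompt-formatting and generation sketch (assumed settings).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ont/Marcoroni-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill the Alpaca-style template from the card
prompt = "### Instruction:\nExplain why the sky is blue in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```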
stablediffusionapi/ceshi
stablediffusionapi
2024-01-24T03:58:44Z
25
1
diffusers
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
text-to-image
2024-01-24T03:56:45Z
--- license: creativeml-openrail-m tags: - modelslab.com - stable-diffusion-api - text-to-image - ultra-realistic pinned: true --- # ceshi API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/8399162561706068531.png) ## Get API Key Get an API key from [ModelsLab API](http://modelslab.com); no payment needed. Replace the key in the code below and change **model_id** to "ceshi". Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs) Try the model for free: [Generate Images](https://modelslab.com/models/ceshi) Model link: [View model](https://modelslab.com/models/ceshi) View all models: [View Models](https://modelslab.com/models)

```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "ceshi",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```

> Use this coupon code to get 25% off **DMGG0RBN**
machinelearningzuu/automatic-speech-recognition
machinelearningzuu
2024-01-24T03:50:01Z
0
1
adapter-transformers
[ "adapter-transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "zh", "ja", "en", "dataset:fka/awesome-chatgpt-prompts", "dataset:wikimedia/wikipedia", "dataset:unalignment/toxic-dpo-v0.1", "dataset:OpenAssistant/oasst2", "dataset:m-a-p/COIG-CQIA", "arxiv:1910.09700", "license:mit", "region:us" ]
automatic-speech-recognition
2022-06-11T09:20:53Z
--- license: mit datasets: - fka/awesome-chatgpt-prompts - wikimedia/wikipedia - unalignment/toxic-dpo-v0.1 - OpenAssistant/oasst2 - m-a-p/COIG-CQIA language: - zh - ja - en metrics: - accuracy - code_eval - character library_name: adapter-transformers pipeline_tag: automatic-speech-recognition --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. 
--> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
gptdeutsch-com/chatgptdeutsch
gptdeutsch-com
2024-01-24T03:49:24Z
0
1
null
[ "license:other", "region:us" ]
null
2024-01-24T03:49:24Z
--- license: other license_name: gptdeutsch license_link: LICENSE ---
simpragma/breeze-listen-dsw-base-hi
simpragma
2024-01-24T03:39:43Z
75
1
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "hi", "dataset:mozilla-foundation/common_voice_16_0", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2024-01-23T10:29:32Z
--- language: - hi license: apache-2.0 base_model: openai/whisper-base tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_16_0 metrics: - wer model-index: - name: Breeze DSW Hindi - base results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_16_0 hi type: mozilla-foundation/common_voice_16_0 config: hi split: test args: hi metrics: - name: Wer type: wer value: 28.50294181738941 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Breeze DSW Hindi - base This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_16_0 hi dataset. It achieves the following results on the evaluation set: - Loss: 0.5205 - Wer: 28.5029 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.553 | 0.1 | 100 | 0.6445 | 39.4988 | | 0.3683 | 1.08 | 200 | 0.5342 | 33.0660 | | 0.2855 | 2.07 | 300 | 0.4983 | 31.4251 | | 0.2233 | 3.06 | 400 | 0.4868 | 30.1547 | | 0.1832 | 4.04 | 500 | 0.4783 | 28.9540 | | 0.1431 | 5.03 | 600 | 0.4902 | 29.1828 | | 0.0972 | 6.01 | 700 | 0.5049 | 28.6380 | | 0.0715 | 6.11 | 800 | 0.5205 | 28.5029 | | 0.0579 | 7.09 | 900 | 0.5366 | 28.9475 | | 0.0519 | 8.08 | 1000 | 0.5381 | 28.7949 | ### Framework versions - Transformers 4.37.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.2.dev0 - Tokenizers 0.15.0
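For transcription, a minimal sketch with the transformers pipeline API is shown below; the audio file path is a placeholder, and audio is assumed to be 16 kHz mono:

```python
# Hedged usage sketch; "sample_hi.wav" is a hypothetical local file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="simpragma/breeze-listen-dsw-base-hi")
print(asr("sample_hi.wav")["text"])
```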
hfl/chinese-alpaca-2-13b-gguf
hfl
2024-01-24T03:33:21Z
220
10
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-16T05:18:19Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-13B-GGUF This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-13B**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 13.7636 +/- 0.19446 | 20.6803 +/- 0.31594 | | Q3_K | 9.5388 +/- 0.13078 | 9.1016 +/- 0.12565 | | Q4_0 | 9.1694 +/- 0.12668 | - | | Q4_K | 8.6633 +/- 0.11957 | 8.6377 +/- 0.11932 | | Q5_0 | 8.6745 +/- 0.12020 | - | | Q5_K | 8.5161 +/- 0.11796 | 8.5210 +/- 0.11803 | | Q6_K | 8.4943 +/- 0.11759 | 8.5011 +/- 0.11775 | | Q8_0 | 8.4595 +/- 0.11718 | - | | F16 | 8.4550 +/- 0.11713 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-13b Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
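The GGUF files above are intended for llama.cpp-compatible runtimes; the card itself does not show loading code, so the following is only a sketch using the `llama-cpp-python` bindings, with a hypothetical local file name standing in for whichever quantization you downloaded:

```python
from llama_cpp import Llama

# "chinese-alpaca-2-13b.Q4_K.gguf" is a placeholder; point this at the
# quantization you actually downloaded from this repository.
llm = Llama(model_path="./chinese-alpaca-2-13b.Q4_K.gguf", n_ctx=4096)

# Simple completion call; llama-cpp-python returns an OpenAI-style dict.
output = llm("请简要介绍一下大语言模型。", max_tokens=256)
print(output["choices"][0]["text"])
```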
annabear2357/distilbert-base-uncased-finetuned-emotion
annabear2357
2024-01-24T03:33:06Z
89
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2024-01-24T03:28:56Z
--- license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9245781463389335 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2180 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8026 | 1.0 | 250 | 0.3135 | 0.9035 | 0.9010 | | 0.2473 | 2.0 | 500 | 0.2180 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.34.0 - Pytorch 2.0.1+cu118 - Datasets 2.14.5 - Tokenizers 0.14.0
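As a quick usage sketch (not part of the original card), the fine-tuned classifier can be called through the standard text-classification pipeline; the example sentence is illustrative:

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier from the Hub (model id from this card).
classifier = pipeline(
    "text-classification",
    model="annabear2357/distilbert-base-uncased-finetuned-emotion",
)

# Returns the top emotion label and score for each input string.
print(classifier("I can't wait to see the results of this experiment!"))
```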
faridkarimli/fgk-chatbot-wp
faridkarimli
2024-01-24T03:31:15Z
46
0
transformers
[ "transformers", "tf", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilbert/distilgpt2", "base_model:finetune:distilbert/distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T03:00:05Z
--- license: apache-2.0 base_model: distilgpt2 tags: - generated_from_keras_callback model-index: - name: faridkarimli/fgk-chatbot-wp results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # faridkarimli/fgk-chatbot-wp This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 5.4789 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 5.4789 | 0 | ### Framework versions - Transformers 4.35.2 - TensorFlow 2.15.0 - Datasets 2.16.1 - Tokenizers 0.15.0
raguirre453525/frieren
raguirre453525
2024-01-24T03:28:32Z
0
0
null
[ "region:us" ]
null
2024-01-24T03:26:59Z
https://civitai.com/models/217900/frieren-sousou-no-frieren-sdxl
hfl/chinese-llama-2-13b-16k-gguf
hfl
2024-01-24T03:28:08Z
148
1
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-16T06:34:38Z
--- license: apache-2.0 language: - zh - en --- # Chinese-LLaMA-2-13B-16K-GGUF This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-LLaMA-2-13B-16K**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 11.8958 +/- 0.20739 | 13.0017 +/- 0.23003 | | Q3_K | 9.7130 +/- 0.17037 | 9.3443 +/- 0.16582 | | Q4_0 | 9.2002 +/- 0.16219 | - | | Q4_K | 9.0055 +/- 0.15918 | 8.9848 +/- 0.15908 | | Q5_0 | 8.8441 +/- 0.15690 | - | | Q5_K | 8.8999 +/- 0.15751 | 8.8983 +/- 0.15753 | | Q6_K | 8.8944 +/- 0.15776 | 8.8833 +/- 0.15760 | | Q8_0 | 8.8745 +/- 0.15745 | - | | F16 | 8.8687 +/- 0.15729 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-llama-2-13b-16k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
kodonho/Momo-70b-DPO-mixed
kodonho
2024-01-24T03:18:51Z
62
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:moreh/MoMo-72B-lora-1.8.4-DPO", "base_model:merge:moreh/MoMo-72B-lora-1.8.4-DPO", "base_model:moreh/MoMo-72B-lora-1.8.6-DPO", "base_model:merge:moreh/MoMo-72B-lora-1.8.6-DPO", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-17T05:31:31Z
--- license: mit tags: - mergekit - merge base_model: - moreh/MoMo-70B-lora-1.8.6-DPO - moreh/MoMo-70B-lora-1.8.4-DPO --- # MoMo-70B-lora-1.8.6-DPO based model with gradient slerp This is an English mixed model based on: * [moreh/MoMo-70B-lora-1.8.6-DPO] GPU code example: ``` import torch from transformers import AutoTokenizer, AutoModelForCausalLM ## v2 models model_path = "kodonho/Momo-70b-DPO-mixed" tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False) model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True ) print(model) prompt = input("please input prompt:") while len(prompt) > 0: input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda") generation_output = model.generate( input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2 ) print(tokenizer.decode(generation_output[0])) prompt = input("please input prompt:") ```
bartowski/Orca-Hermes-7B-slerp-exl2
bartowski
2024-01-24T03:14:11Z
0
1
null
[ "merge", "mergekit", "Open-Orca/Mistral-7B-OpenOrca", "teknium/OpenHermes-2.5-Mistral-7B", "text-generation", "license:apache-2.0", "region:us" ]
text-generation
2024-01-24T02:58:10Z
--- license: apache-2.0 tags: - merge - mergekit - Open-Orca/Mistral-7B-OpenOrca - teknium/OpenHermes-2.5-Mistral-7B quantized_by: bartowski pipeline_tag: text-generation --- ## Exllama v2 Quantizations of Orca-Hermes-7B-slerp Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.12">turboderp's ExLlamaV2 v0.0.12</a> for quantization. # The "main" branch only contains the measurement.json; download one of the other branches for the model (see below). Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions. Original model: https://huggingface.co/cris177/Orca-Hermes-7B-slerp | Branch | Bits | lm_head bits | Size | Description | | ----- | ---- | ------- | ------ | ------------ | | [8_0](https://huggingface.co/Bartowski/Orca-Hermes-7B-slerp-exl2/tree/8_0) | 8.0 | 8.0 | 9.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. | | [6_5](https://huggingface.co/Bartowski/Orca-Hermes-7B-slerp-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. | | [5_0](https://huggingface.co/Bartowski/Orca-Hermes-7B-slerp-exl2/tree/5_0) | 5.0 | 6.0 | 7.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. | | [4_25](https://huggingface.co/Bartowski/Orca-Hermes-7B-slerp-exl2/tree/4_25) | 4.25 | 6.0 | 6.7 GB | GPTQ equivalent bits per weight, slightly higher quality. | | [3_5](https://huggingface.co/Bartowski/Orca-Hermes-7B-slerp-exl2/tree/3_5) | 3.5 | 6.0 | 6.1 GB | Lower quality, only use if you have to. | All VRAM requirements are estimated from 16k context; for 32k context, add ~2 GB. ## Download instructions With git: ```shell git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Orca-Hermes-7B-slerp-exl2 Orca-Hermes-7B-slerp-exl2-6_5 ``` With huggingface hub (credit to TheBloke for instructions): ```shell pip3 install huggingface-hub ``` To download the `main` branch (only useful if you only care about measurement.json) to a folder called `Orca-Hermes-7B-slerp-exl2`: ```shell mkdir Orca-Hermes-7B-slerp-exl2 huggingface-cli download bartowski/Orca-Hermes-7B-slerp-exl2 --local-dir Orca-Hermes-7B-slerp-exl2 --local-dir-use-symlinks False ``` To download from a different branch, add the `--revision` parameter: Linux: ```shell mkdir Orca-Hermes-7B-slerp-exl2-6_5 huggingface-cli download bartowski/Orca-Hermes-7B-slerp-exl2 --revision 6_5 --local-dir Orca-Hermes-7B-slerp-exl2-6_5 --local-dir-use-symlinks False ``` Windows (which apparently doesn't like _ in folders sometimes?): ```shell mkdir Orca-Hermes-7B-slerp-exl2-6.5 huggingface-cli download bartowski/Orca-Hermes-7B-slerp-exl2 --revision 6_5 --local-dir Orca-Hermes-7B-slerp-exl2-6.5 --local-dir-use-symlinks False ```
hfl/chinese-alpaca-2-7b-64k-gguf
hfl
2024-01-24T03:03:15Z
234
5
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-23T07:12:33Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-7B-64K-GGUF This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-Alpaca-2-7B-64K**, which is tuned from Chinese-Alpaca-2-7B with the **YaRN method**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 9.8201 +/- 0.13298 | 10.3057 +/- 0.14197 | | Q3_K | 8.4435 +/- 0.11467 | 8.3556 +/- 0.11316 | | Q4_0 | 8.3573 +/- 0.11496 | - | | Q4_K | 8.0558 +/- 0.10948 | 8.0557 +/- 0.10964 | | Q5_0 | 8.0220 +/- 0.10954 | - | | Q5_K | 7.9388 +/- 0.10802 | 7.9440 +/- 0.10815 | | Q6_K | 7.9267 +/- 0.10792 | 7.9126 +/- 0.10775 | | Q8_0 | 7.9117 +/- 0.10773 | - | | F16 | 7.9124 +/- 0.10780 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the full model in Hugging Face format, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-64k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
hfl/chinese-alpaca-2-7b-16k-gguf
hfl
2024-01-24T03:01:24Z
196
1
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-16T06:33:48Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-7B-16K-GGUF This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-7B-16K**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 11.8181 +/- 0.16402 | 13.6285 +/- 0.19294 | | Q3_K | 9.5596 +/- 0.13369 | 9.3748 +/- 0.13108 | | Q4_0 | 9.6480 +/- 0.13459 | - | | Q4_K | 8.9622 +/- 0.12507 | 8.9229 +/- 0.12467 | | Q5_0 | 8.9274 +/- 0.12485 | - | | Q5_K | 8.8370 +/- 0.12353 | 8.8221 +/- 0.12348 | | Q6_K | 8.7830 +/- 0.12290 | 8.7695 +/- 0.12260 | | Q8_0 | 8.7644 +/- 0.12261 | - | | F16 | 8.7676 +/- 0.12268 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-16k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
hfl/chinese-alpaca-2-7b-rlhf-gguf
hfl
2024-01-24T02:59:29Z
346
5
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-25T07:20:00Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-7B-RLHF-GGUF This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-Alpaca-2-7B-RLHF**, which is tuned from Chinese-Alpaca-2-7B with RLHF using DeepSpeed-Chat. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 10.5211 +/- 0.14139 | 11.9331 +/- 0.16168 | | Q3_K | 8.9748 +/- 0.12043 | 8.8238 +/- 0.11850 | | Q4_0 | 8.7843 +/- 0.11854 | - | | Q4_K | 8.4643 +/- 0.11341 | 8.4226 +/- 0.11302 | | Q5_0 | 8.4563 +/- 0.11353 | - | | Q5_K | 8.3722 +/- 0.11236 | 8.3336 +/- 0.11192 | | Q6_K | 8.3207 +/- 0.11184 | 8.3047 +/- 0.11159 | | Q8_0 | 8.3100 +/- 0.11173 | - | | F16 | 8.3112 +/- 0.11173 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the full model in Hugging Face format, please see: https://huggingface.co/hfl/chinese-alpaca-2-7b-rlhf Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
hfl/chinese-alpaca-2-1.3b-gguf
hfl
2024-01-24T02:54:44Z
412
6
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-11-16T05:21:50Z
--- license: apache-2.0 language: - zh - en --- # Chinese-Alpaca-2-1.3B-GGUF This repository contains the GGUF-v3 models (llama.cpp compatible) for **Chinese-Alpaca-2-1.3B**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 19.9339 +/- 0.29752 | 18.8935 +/- 0.28558 | | Q3_K | 17.2487 +/- 0.27668 | 17.2950 +/- 0.27994 | | Q4_0 | 16.1358 +/- 0.25091 | - | | Q4_K | 16.4583 +/- 0.26453 | 16.2688 +/- 0.26216 | | Q5_0 | 15.9068 +/- 0.25545 | - | | Q5_K | 15.7547 +/- 0.25207 | 16.0190 +/- 0.25782 | | Q6_K | 15.8166 +/- 0.25359 | 15.7357 +/- 0.25210 | | Q8_0 | 15.7972 +/- 0.25384 | - | | F16 | 15.8098 +/- 0.25403 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the Hugging Face version, please see: https://huggingface.co/hfl/chinese-alpaca-2-1.3b Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
hfl/chinese-llama-2-7b-64k-gguf
hfl
2024-01-24T02:53:35Z
208
2
null
[ "gguf", "zh", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2023-12-21T05:45:22Z
--- license: apache-2.0 language: - zh - en --- # Chinese-LLaMA-2-7B-64K-GGUF This repository contains the GGUF-v3 version (llama.cpp compatible) of **Chinese-LLaMA-2-7B-64K**, which is tuned from Chinese-LLaMA-2-7B with the **YaRN method**. ## Performance Metric: PPL, lower is better | Quant | original | imatrix (`-im`) | |-----|------|------| | Q2_K | 11.5424 +/- 0.24106 | 12.1599 +/- 0.26050 | | Q3_K | 10.0152 +/- 0.21296 | 9.9269 +/- 0.21335 | | Q4_0 | 9.7500 +/- 0.20872 | - | | Q4_K | 9.7687 +/- 0.21133 | 9.7239 +/- 0.20999 | | Q5_0 | 9.4647 +/- 0.20280 | - | | Q5_K | 9.6229 +/- 0.20829 | 9.5673 +/- 0.20675 | | Q6_K | 9.5996 +/- 0.20816 | 9.5753 +/- 0.20734 | | Q8_0 | 9.4078 +/- 0.20378 | - | | F16 | 9.5750 +/- 0.20735 | - | *The models with the `-im` suffix are generated with an importance matrix, which generally (though not always) gives better performance.* ## Others For the full model in Hugging Face format, please see: https://huggingface.co/hfl/chinese-llama-2-7b-64k Please refer to [https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/) for more details.
Sesgaro/picin_assist
Sesgaro
2024-01-24T02:41:20Z
4
0
transformers
[ "transformers", "safetensors", "falcon", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2024-01-24T02:35:47Z
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
tinywell/ppo-Huggy
tinywell
2024-01-24T02:32:28Z
2
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
2024-01-24T02:32:23Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: tinywell/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
justmalhar/phi2-2.7B-dork-finetune
justmalhar
2024-01-24T02:27:57Z
0
0
transformers
[ "transformers", "safetensors", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-01-24T02:21:48Z
--- license: mit library_name: transformers --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64785512256b62e2198f065b/XyI0ZYhoP-56yfktK4qO4.png) #### Sample Dataset: [link](https://www.kaggle.com/datasets/vaclavhalama/reddit-questions-and-answers) #### Prompt Format: ``` ### Question: How can you prove that you are real, and not just a programme in some future computer running for the amusement of others? ### Answer: I am alive. You're dead! And if it's possible to be born again then so is me :) ```
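The card gives only the prompt format; a generation sketch following that format might look like the code below. The decoding settings and the question are illustrative assumptions, not the author's:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "justmalhar/phi2-2.7B-dork-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt in the "### Question: ... ### Answer:" format from the card.
prompt = "### Question: Why do cats purr?\n\n### Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```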
hwang2006/bert-finetuned-squad
hwang2006
2024-01-24T02:15:10Z
93
0
transformers
[ "transformers", "safetensors", "bert", "question-answering", "generated_from_trainer", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2024-01-24T01:28:15Z
--- license: apache-2.0 base_model: bert-base-cased tags: - generated_from_trainer model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.0.1+cu117 - Datasets 2.16.1 - Tokenizers 0.15.0
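Since this card omits usage, here is a minimal extractive question-answering sketch with the standard pipeline; the question/context pair is purely illustrative:

```python
from transformers import pipeline

# Load the fine-tuned BERT checkpoint for extractive question answering.
qa = pipeline("question-answering", model="hwang2006/bert-finetuned-squad")

result = qa(
    question="What was the model fine-tuned for?",
    context="bert-finetuned-squad is a BERT model fine-tuned for extractive "
            "question answering on SQuAD-style data.",
)
print(result["answer"], result["score"])
```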