| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| 
	yeniceriSGK/falcon-1b-pibrain-v3 | 
	yeniceriSGK | 2024-02-13T13:36:12Z | 3 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "falcon",
  "text-generation",
  "custom_code",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "4-bit",
  "bitsandbytes",
  "region:us"
] | 
	text-generation | 2024-02-13T13:36:08Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
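The card itself ships no usage code, but the repository tags (`falcon`, `text-generation`, `custom_code`, `4-bit`, `bitsandbytes`) suggest a pre-quantized Falcon checkpoint with custom modeling files. A minimal sketch under those assumptions; nothing below comes from the card:

```python
# Hypothetical usage sketch inferred from the repo tags, not from the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yeniceriSGK/falcon-1b-pibrain-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# The 4-bit/bitsandbytes tags suggest the quantization config is stored in the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the custom_code tag implies custom modeling files
    device_map="auto",
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```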
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	LarryAIDraw/Fubuki | 
	LarryAIDraw | 2024-02-13T13:34:39Z | 0 | 1 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T13:25:46Z | 
	---
license: creativeml-openrail-m
---
https://civitai.com/models/301865/fubuki-hellish-blizzard-one-punch-man | 
| 
	LarryAIDraw/HighSchoolFleet_MunetaniMashimo | 
	LarryAIDraw | 2024-02-13T13:34:21Z | 0 | 0 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T13:25:01Z | 
	---
license: creativeml-openrail-m
---
https://civitai.com/models/302264/munetani-mashimo-or-high-school-fleet | 
| 
	LarryAIDraw/morishimaharuka-nvwls-v1 | 
	LarryAIDraw | 2024-02-13T13:33:31Z | 0 | 0 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T13:22:45Z | 
	---
license: creativeml-openrail-m
---
https://civitai.com/models/303652/haruka-morishima-amagami-ss-lora | 
| 
	LarryAIDraw/privaty-nikke-richy-v2 | 
	LarryAIDraw | 2024-02-13T13:33:21Z | 0 | 0 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T13:22:04Z | 
	---
license: creativeml-openrail-m
---
https://civitai.com/models/104487/privaty-nikke-lora-or-4-outfits-cat-maid-dress-casual-and-default | 
| 
	ansilmbabl/cards-blt-swin-tiny-patch4-window7-224-finetuned-v2 | 
	ansilmbabl | 2024-02-13T13:26:42Z | 47 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "swin",
  "image-classification",
  "generated_from_trainer",
  "dataset:imagefolder",
  "base_model:microsoft/swin-tiny-patch4-window7-224",
  "base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	image-classification | 2024-02-13T10:32:42Z | 
	---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: cards-blt-swin-tiny-patch4-window7-224-finetuned-v2
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.5022222222222222
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cards-blt-swin-tiny-patch4-window7-224-finetuned-v2
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2162
- Accuracy: 0.5022
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
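For reference, a sketch of how these values map onto `transformers.TrainingArguments`; this is a reconstruction from the list above, not the repository's actual training script:

```python
# Reconstruction of the reported hyperparameters; output_dir and dataset wiring are hypothetical.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cards-blt-swin-tiny-patch4-window7-224-finetuned-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```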
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4297        | 1.0   | 56   | 1.1976          | 0.4933   |
| 1.4078        | 1.99  | 112  | 1.1964          | 0.5011   |
| 1.417         | 2.99  | 168  | 1.2025          | 0.4961   |
| 1.4163        | 4.0   | 225  | 1.2295          | 0.4883   |
| 1.4318        | 5.0   | 281  | 1.2330          | 0.495    |
| 1.4383        | 5.99  | 337  | 1.2162          | 0.5022   |
| 1.4212        | 6.99  | 393  | 1.2634          | 0.4717   |
| 1.4346        | 8.0   | 450  | 1.3083          | 0.4689   |
| 1.419         | 9.0   | 506  | 1.2719          | 0.4806   |
| 1.4252        | 9.99  | 562  | 1.3048          | 0.4911   |
| 1.4522        | 10.99 | 618  | 1.2708          | 0.4794   |
| 1.3748        | 12.0  | 675  | 1.3720          | 0.4383   |
| 1.3966        | 13.0  | 731  | 1.3095          | 0.4594   |
| 1.4507        | 13.99 | 787  | 1.2430          | 0.485    |
| 1.4033        | 14.99 | 843  | 1.2728          | 0.4794   |
| 1.3972        | 16.0  | 900  | 1.2611          | 0.4883   |
| 1.4136        | 17.0  | 956  | 1.3166          | 0.45     |
| 1.3992        | 17.99 | 1012 | 1.3103          | 0.4856   |
| 1.3614        | 18.99 | 1068 | 1.3302          | 0.4422   |
| 1.3747        | 20.0  | 1125 | 1.2919          | 0.4856   |
| 1.3868        | 21.0  | 1181 | 1.3166          | 0.4728   |
| 1.3399        | 21.99 | 1237 | 1.3200          | 0.4672   |
| 1.3943        | 22.99 | 1293 | 1.2920          | 0.4811   |
| 1.3635        | 24.0  | 1350 | 1.3109          | 0.4833   |
| 1.3724        | 25.0  | 1406 | 1.3100          | 0.4644   |
| 1.3141        | 25.99 | 1462 | 1.3263          | 0.4978   |
| 1.3576        | 26.99 | 1518 | 1.3307          | 0.4772   |
| 1.3022        | 28.0  | 1575 | 1.3409          | 0.4978   |
| 1.2982        | 29.0  | 1631 | 1.3962          | 0.4583   |
| 1.2657        | 29.99 | 1687 | 1.3329          | 0.4817   |
| 1.3152        | 30.99 | 1743 | 1.2973          | 0.49     |
| 1.2924        | 32.0  | 1800 | 1.3159          | 0.4833   |
| 1.214         | 33.0  | 1856 | 1.3955          | 0.4833   |
| 1.2717        | 33.99 | 1912 | 1.4583          | 0.46     |
| 1.2692        | 34.99 | 1968 | 1.3504          | 0.4939   |
| 1.2127        | 36.0  | 2025 | 1.3784          | 0.4833   |
| 1.1956        | 37.0  | 2081 | 1.4184          | 0.4817   |
| 1.2408        | 37.99 | 2137 | 1.3849          | 0.4944   |
| 1.1699        | 38.99 | 2193 | 1.4298          | 0.4844   |
| 1.1727        | 40.0  | 2250 | 1.4331          | 0.4772   |
| 1.1485        | 41.0  | 2306 | 1.4597          | 0.4672   |
| 1.1668        | 41.99 | 2362 | 1.4429          | 0.4783   |
| 1.1881        | 42.99 | 2418 | 1.4555          | 0.4839   |
| 1.1204        | 44.0  | 2475 | 1.4648          | 0.4783   |
| 1.1523        | 45.0  | 2531 | 1.4744          | 0.4733   |
| 1.1206        | 45.99 | 2587 | 1.4792          | 0.4906   |
| 1.1135        | 46.99 | 2643 | 1.5009          | 0.4678   |
| 1.1227        | 48.0  | 2700 | 1.5480          | 0.4733   |
| 1.1017        | 49.0  | 2756 | 1.5907          | 0.4644   |
| 1.1601        | 49.99 | 2812 | 1.5136          | 0.47     |
| 1.1239        | 50.99 | 2868 | 1.5384          | 0.4789   |
| 1.09          | 52.0  | 2925 | 1.5716          | 0.4711   |
| 1.1023        | 53.0  | 2981 | 1.5736          | 0.4728   |
| 1.1038        | 53.99 | 3037 | 1.5919          | 0.4556   |
| 1.058         | 54.99 | 3093 | 1.5534          | 0.4772   |
| 1.0405        | 56.0  | 3150 | 1.5788          | 0.4717   |
| 1.0172        | 57.0  | 3206 | 1.5855          | 0.4767   |
| 1.0036        | 57.99 | 3262 | 1.6425          | 0.455    |
| 1.0124        | 58.99 | 3318 | 1.6039          | 0.4678   |
| 1.0647        | 60.0  | 3375 | 1.5891          | 0.4572   |
| 1.0143        | 61.0  | 3431 | 1.6265          | 0.4483   |
| 1.0051        | 61.99 | 3487 | 1.6208          | 0.4633   |
| 0.9571        | 62.99 | 3543 | 1.6874          | 0.4483   |
| 0.9838        | 64.0  | 3600 | 1.6778          | 0.4517   |
| 0.9995        | 65.0  | 3656 | 1.6248          | 0.4722   |
| 1.0374        | 65.99 | 3712 | 1.6645          | 0.4667   |
| 0.9483        | 66.99 | 3768 | 1.6307          | 0.4611   |
| 0.9825        | 68.0  | 3825 | 1.6662          | 0.4661   |
| 1.0023        | 69.0  | 3881 | 1.6650          | 0.46     |
| 0.9642        | 69.99 | 3937 | 1.6953          | 0.4494   |
| 0.9687        | 70.99 | 3993 | 1.7076          | 0.4661   |
| 0.9542        | 72.0  | 4050 | 1.7012          | 0.4656   |
| 0.9378        | 73.0  | 4106 | 1.7056          | 0.4533   |
| 0.9542        | 73.99 | 4162 | 1.7331          | 0.4572   |
| 0.9035        | 74.99 | 4218 | 1.7459          | 0.4417   |
| 0.9631        | 76.0  | 4275 | 1.7236          | 0.465    |
| 0.8759        | 77.0  | 4331 | 1.7294          | 0.455    |
| 0.9218        | 77.99 | 4387 | 1.7654          | 0.4578   |
| 0.9077        | 78.99 | 4443 | 1.7234          | 0.4594   |
| 0.8924        | 80.0  | 4500 | 1.7256          | 0.4683   |
| 0.9156        | 81.0  | 4556 | 1.7320          | 0.4678   |
| 0.806         | 81.99 | 4612 | 1.7348          | 0.4661   |
| 0.8863        | 82.99 | 4668 | 1.7514          | 0.4606   |
| 0.8698        | 84.0  | 4725 | 1.7484          | 0.4661   |
| 0.8623        | 85.0  | 4781 | 1.7420          | 0.4778   |
| 0.8643        | 85.99 | 4837 | 1.7636          | 0.4617   |
| 0.8914        | 86.99 | 4893 | 1.7552          | 0.465    |
| 0.837         | 88.0  | 4950 | 1.7552          | 0.4644   |
| 0.8217        | 89.0  | 5006 | 1.7532          | 0.4639   |
| 0.8601        | 89.99 | 5062 | 1.7447          | 0.4683   |
| 0.8293        | 90.99 | 5118 | 1.7622          | 0.4611   |
| 0.8301        | 92.0  | 5175 | 1.7616          | 0.4633   |
| 0.7752        | 93.0  | 5231 | 1.7585          | 0.4722   |
| 0.8533        | 93.99 | 5287 | 1.7842          | 0.4617   |
| 0.8156        | 94.99 | 5343 | 1.7837          | 0.4622   |
| 0.8094        | 96.0  | 5400 | 1.7896          | 0.4583   |
| 0.839         | 97.0  | 5456 | 1.7835          | 0.465    |
| 0.839         | 97.99 | 5512 | 1.7883          | 0.46     |
| 0.7763        | 98.99 | 5568 | 1.7838          | 0.4594   |
| 0.8186        | 99.56 | 5600 | 1.7837          | 0.4606   |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.17.0
- Tokenizers 0.15.2
 | 
| 
	SimplCup/Purplers | 
	SimplCup | 2024-02-13T13:08:35Z | 0 | 0 | null | 
	[
  "license:cc-by-nc-nd-4.0",
  "region:us"
] | null | 2024-02-13T13:08:06Z | 
	---
license: cc-by-nc-nd-4.0
---
 | 
| 
	hiig-ai-lab/simba-v01c | 
	hiig-ai-lab | 2024-02-13T13:06:23Z | 16 | 3 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "german",
  "deutsch",
  "simplification",
  "vereinfachung",
  "conversational",
  "de",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T09:27:22Z | 
	---
license: apache-2.0
language:
- de
pipeline_tag: text-generation
tags:
- german
- deutsch
- simplification
- vereinfachung
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
We fine-tuned [jphme/em_german_leo_mistral](https://huggingface.co/jphme/em_german_leo_mistral) on a set of roughly 2,000 newspaper articles that have been simplified by the Austrian Press Agency.
Our aim was to have a model which can simplify German-language text.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Members of the [Public Interest AI research group](https://publicinterest.ai/), [HIIG Berlin](https://www.hiig.de/)
- **Model type:** simplification model, text generation
- **Language(s) (NLP):** German
- **License:** Apache 2.0
- **Finetuned from model:** jphme/em_german_leo_mistral
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/fhewett/simba
<!-- - **Paper [optional]:** [More Information Needed] -->
- **Project website:** https://publicinterest.ai/tool/simba
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts.
### Downstream Use
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
We have fine-tuned using only newspaper articles. We have not yet performed extensive out-of-domain testing, but believe that the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset which you think could work (parallel texts, German standard & German simplified).
<!-- ### Out-of-Scope Use -->
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
As with most text generation models, the model sometimes produces information that is incorrect. 
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Please check manually that your output text corresponds to the input text, as factual inconsistencies may have arisen.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
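In the absence of official code, here is a minimal sketch following the same pattern as other text-generation cards in this collection, assuming the repository ships a chat template (which the `conversational` tag suggests); the prompt wording is illustrative only:

```python
# Hedged sketch based on the repo tags; nothing here comes from the card itself.
from transformers import AutoTokenizer
import transformers
import torch

model = "hiig-ai-lab/simba-v01c"
messages = [{"role": "user", "content": "Vereinfache bitte den folgenden Text: ..."}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```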
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
A sample of the data used to train our model can be found [here](https://github.com/fhewett/apa-rst/tree/main/original_texts).
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
<!-- #### Speeds, Sizes, Times [optional]  -->
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
#### Summary
For now, we have manually checked the performance of our model on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. texts similar to our training data). We have not yet applied any large-scale, metrics-based evaluation.
<!-- ## Citation [optional]
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]-->
## Model Card Contact
simba -at- hiig.de | 
| 
	toshi456/llava-jp-1.3b-v1.0-siglip-so400m-patch14-384 | 
	toshi456 | 2024-02-13T13:05:08Z | 57 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llava-jp",
  "text-generation",
  "vision",
  "image-captioning",
  "VQA",
  "image-to-text",
  "ja",
  "dataset:toshi456/LLaVA-CC3M-Pretrain-595K-JA",
  "dataset:turing-motors/LLaVA-Instruct-150K-JA",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	image-to-text | 2024-02-13T11:21:26Z | 
	---
license: cc-by-nc-4.0
datasets:
- toshi456/LLaVA-CC3M-Pretrain-595K-JA
- turing-motors/LLaVA-Instruct-150K-JA
language:
- ja
pipeline_tag: image-to-text
tags:
- vision
- image-captioning
- VQA
---
# LLaVA-JP Model Card
## Model detail
**Model type:**
LLaVA-JP is a vision-language model that can converse about input images.<br>
This model was trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) with the [LLaVA](https://llava-vl.github.io/) method, using [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) as the image encoder.
**Training:**
This model was initially trained with the Vision Projector using [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA) and STAIR Captions. <br>
In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome.
Resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main
## How to use the model
**1. Download dependencies**
```bash
git clone https://github.com/tosiyuki/LLaVA-JP.git
```
**2. Inference**
```python
import requests
import torch
import transformers
from PIL import Image
from transformers.generation.streamers import TextStreamer
from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX
from llava.conversation import conv_templates, SeparatorStyle
from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM
from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments
from llava.train.dataset import tokenizer_image_token
if __name__ == "__main__":
    parser = transformers.HfArgumentParser(
        (ModelArguments, DataArguments, TrainingArguments))
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
    model_path = 'toshi456/llava-jp-1.3b-v1.0-siglip-so400m-patch14-384'
    device = "cuda" if torch.cuda.is_available() else "cpu"
    torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32
    model = LlavaGpt2ForCausalLM.from_pretrained(
        model_path, 
        low_cpu_mem_usage=True,
        use_safetensors=True,
        torch_dtype=torch_dtype,
        device_map=device,
    )
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_path,
        model_max_length=1024,
        padding_side="right",
        use_fast=False,
    )
    model.eval()
    conv_mode = "v1"
    conv = conv_templates[conv_mode].copy()
    # image pre-process
    image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg"
    image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB')
    if device == "cuda":
        image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].half().cuda().to(torch_dtype)
    else:
        image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].to(torch_dtype)
    # create prompt
    # ユーザー: <image>\n{prompt}  (ユーザー = "User")
    prompt = "猫の隣には何がありますか?"  # "What is next to the cat?"
    inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()
    input_ids = tokenizer_image_token(
        prompt, 
        tokenizer, 
        IMAGE_TOKEN_INDEX, 
        return_tensors='pt'
    ).unsqueeze(0)
    if device == "cuda":
        input_ids = input_ids.to(device)
    input_ids = input_ids[:, :-1]  # strip the trailing </sep> token appended to the input
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0)
    # predict
    with torch.inference_mode():
        model.generate(
            inputs=input_ids,
            images=image_tensor,
            do_sample=True,
            temperature=0.01,
            top_p=1.0,
            max_new_tokens=256,
            streamer=streamer,
            use_cache=True,
        )
    """猫の隣にはノートパソコンがある。<EOD|LLM-jp>"""
```
## Training dataset
**Stage1 Pretrain**
- [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
**Stage2 Fine-tuning**
- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)
## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)
## License
cc-by-nc-4.0 | 
| 
	Gordon119/TAT-openai-whisper-large-v3-Lora-ContinualTraining-epoch5-total5epoch | 
	Gordon119 | 2024-02-13T12:57:12Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-02T20:31:45Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
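The card is an empty template. The repository name suggests a LoRA adapter for `openai/whisper-large-v3`, so here is a sketch under that assumption; the adapter layout is not confirmed anywhere in the card:

```python
# Assumption: the repo holds a PEFT LoRA adapter for openai/whisper-large-v3,
# inferred from the repository name only.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(
    base,
    "Gordon119/TAT-openai-whisper-large-v3-Lora-ContinualTraining-epoch5-total5epoch",
)
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```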
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	Himitsui/Kaiju-11B | 
	Himitsui | 2024-02-13T12:55:31Z | 151 | 14 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "en",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T12:33:38Z | 
	---
license: cc-by-nc-4.0
language:
- en
---
Included in this repo is the full-precision model for Kaiju-11B.
(ノ≧∀≦)ノ ‥…━━━━━━━━━━━━━★          |||   ╲/\╭[ ᴼᴼ ౪ ᴼᴼ]╮/\╱\
Hiya! This is an experiment using Gryphe's [MergeMonster](https://github.com/Gryphe/MergeMonster).
I decided to try to reduce what the community calls 'GPT-isms' or GPT slop. Solar is a good model, but it does have its fair share of positivity bias and 'slop' in roleplays. I used my friend [Sao](https://huggingface.co/Sao10K)'s models as bases, as they are pretty popular, along with KuroMitsu and the popular Instruct-Uncensored tune.
The Alpaca format should be fine as it is universal (a generic layout is sketched below); the Vicuna format should work too. The Universal-Light preset in SillyTavern is pretty nice as well. :)
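For reference, the standard Alpaca layout mentioned above; this is the generic community template, not something specified by this repo:

```python
# Generic Alpaca prompt layout; not specific to Kaiju-11B.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""
```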
💜 I hope this model may be useful to you 💜
***
Merge Details Below:
<details><summary>See Merge Config</summary>
  
```
-----------------------------------------------------------------------------------------------------
| Type | Phrase             | Context                  | Raw Prob*    | Used Prob**  | Change       |
-----------------------------------------------------------------------------------------------------
| BAD  | anticipation       | Her body quivers with    | 9.99850%     | 119.98%      | -54.02%      |
| BAD  | anticipation       | The atmosphere is thic.. | 8.82392%     | 105.89%      | -32.13%      |
| BAD  | unwavering         | Filled with an           | 0.09003%     | 1.08%        | -0.06%       |
| BAD  | determination      | Her eyes were filled w.. | 0.19863%     | 2.38%        | -0.26%       |
| BAD  | determination      | Her stubbornness only .. | 7.17110%     | 86.05%       | -39.86%      |
| BAD  | whisper            | Her voice barely above.. | 96.55492%    | 1158.66%     | -8.91%       |
| BAD  | spine              | shivers down her         | 85.57597%    | 1026.91%     | -66.19%      |
| BAD  | sends shivers      | The thrill of the act    | 0.00230%     | 0.03%        | -0.00%       |
| BAD  | ministrations      | She moans and twitches.. | 1.35264%     | 16.23%       | -10.49%      |
| BAD  | legs               | wraps her                | 2.45741%     | 29.49%       | -10.58%      |
| BAD  | imposing figure    | He had an                | 0.00356%     | 0.04%        | +0.00%       |
| BAD  | shared challenges  | Their bond strengthene.. | 0.10075%     | 1.21%        | -0.03%       |
| BAD  | bond               | forged a                 | 1.78930%     | 21.47%       | -9.07%       |
| BAD  | bond               | an unspoken              | 4.33001%     | 51.96%       | -28.17%      |
| BAD  | enhance our expe.. | I'm excited to see how   | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | sense of vulnera.. | create a                 | 0.00003%     | 0.00%        | -0.00%       |
| BAD  | dimensions of in.. | explore new              | 0.00047%     | 0.01%        | -0.00%       |
| BAD  | deepening our co.. | while                    | 0.00003%     | 0.00%        | -0.00%       |
| BAD  | shared experiences | through                  | 0.00469%     | 0.06%        | -0.00%       |
| BAD  | societal expecta.. | that transcend           | 0.00170%     | 0.02%        | -0.00%       |
| BAD  | conventional bou.. | that defy                | 0.03593%     | 0.43%        | +0.04%       |
| BAD  | conventional bou.. | and defy                 | 0.00410%     | 0.05%        | +0.01%       |
| BAD  | open communication | an environment           | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | emotional vulner.. | an environment           | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | heightens our co.. | touch and the anticipa.. | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | sensations you'r.. | I'm enjoying             | 0.00000%     | 0.00%        | -0.00%       |
| BAD  | is truly arousing  | attention to detail      | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | is truly arousing  | way you explore my body  | 0.00001%     | 0.00%        | +0.00%       |
| BAD  | challenge presen.. | my resolve unwavering .. | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | humble vessel      | surrendering to the ex.. | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | bond               | cherishing the unique    | 1.37498%     | 16.50%       | +1.21%       |
| BAD  | bond               | special                  | 0.05834%     | 0.70%        | +0.01%       |
| BAD  | grows stronger w.. | bond                     | 0.00000%     | 0.00%        | +0.00%       |
| BAD  | that cannot be b.. | bond                     | 0.00000%     | 0.00%        | -0.00%       |
| BAD  | becomes unbreaka.. | bond                     | 0.00000%     | 0.00%        | -0.00%       |
| BAD  | grew stronger wi.. | bond                     | 0.00000%     | 0.00%        | +0.00%       |
| GOOD | The apple is in .. | Question: If I'm in th.. | 78.38934%    | 78.39%       | -10.79%      |
------------------------------------------------------------------------------------------------------
| Totals                                               | 298.32%      | 2717.54%     | -269.30%     |
------------------------------------------------------------------------------------------------------
```
  
`*` = unweighted, raw probability; `**` = probability after weight adjustments
```
-------- MERGE COMPOSITION ---------
Fimbulvetr-11B-v2-Test-14: 0.50
KuroMitsu-11B: 0.18
Fimbulvetr-10.7B-v1: 0.17
SOLAR-10.7B-Instruct-v1.0-uncensored: 0.10
Solstice-11B-v1: 0.05
```
</details><br> | 
| 
	iamhack/DH_DOOR_BOT | 
	iamhack | 2024-02-13T12:47:37Z | 148 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "hubert",
  "audio-classification",
  "generated_from_trainer",
  "dataset:audiofolder",
  "base_model:ntu-spml/distilhubert",
  "base_model:finetune:ntu-spml/distilhubert",
  "license:apache-2.0",
  "model-index",
  "endpoints_compatible",
  "region:us"
] | 
	audio-classification | 2024-02-13T09:51:26Z | 
	---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: DH_DOOR_BOT
  results:
  - task:
      name: Audio Classification
      type: audio-classification
    dataset:
      name: audiofolder
      type: audiofolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.956539391366933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DH_DOOR_BOT
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1345
- Accuracy: 0.9565
## Model description
More information needed
## Intended uses & limitations
More information needed
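No usage code is provided; a minimal inference sketch, assuming the standard audio-classification pipeline applies to this fine-tuned distilhubert checkpoint:

```python
# Minimal sketch; "door.wav" is a hypothetical input file.
from transformers import pipeline

classifier = pipeline("audio-classification", model="iamhack/DH_DOOR_BOT")
print(classifier("door.wav"))
```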
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: tpu
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2536        | 1.0   | 423  | 0.2130          | 0.9297   |
| 0.1807        | 2.0   | 847  | 0.1698          | 0.9438   |
| 0.1613        | 3.0   | 1270 | 0.1642          | 0.9457   |
| 0.1447        | 4.0   | 1694 | 0.1372          | 0.9561   |
| 0.1348        | 4.99  | 2115 | 0.1345          | 0.9565   |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.1
 | 
| 
	chtai/LHK_DPO_v1 | 
	chtai | 2024-02-13T12:40:22Z | 12 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "gguf",
  "mixtral",
  "text-generation",
  "en",
  "base_model:TomGrc/FusionNet_7Bx2_MoE_14B",
  "base_model:quantized:TomGrc/FusionNet_7Bx2_MoE_14B",
  "license:mit",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T11:55:59Z | 
	---
base_model: TomGrc/FusionNet_7Bx2_MoE_14B
model_creator: HanNayeoniee
model_name: LHK_DPO_v1
license: mit
language:
- en
---
# Description
This repo contains GGUF-format model files for [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1).
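The card gives no usage snippet; since the files are GGUF, one common route is llama-cpp-python. A sketch, assuming a hypothetical quantization filename (check the repository's file list for the actual name):

```python
# The .gguf filename below is a placeholder; pick an actual file from the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download("chtai/LHK_DPO_v1", "lhk_dpo_v1.Q4_K_M.gguf")  # hypothetical filename
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Q: What is DPO? A:", max_tokens=128)["choices"][0]["text"])
```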
 | 
| 
	pgajo/mdeberta_EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mdeberta_E9_DEV97.0 | 
	pgajo | 2024-02-13T12:38:17Z | 93 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "deberta-v2",
  "question-answering",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2024-02-13T12:37:19Z | 
	---
{}
---
Model description:
- Model: microsoft/mdeberta-v3-base
- Dataset: TASTEset
- Unshuffled ratio: ['0']
- Shuffled ratio: ['1']
- Best exact match epoch: 9
- Best exact match: 96.98
- Best epoch: 9
- Drop duplicates: ['1']
- Max epochs: 10
- Optimizer lr: 3e-05
- Optimizer eps: 1e-08
- Batch size: 8
- Dataset path: pgajo/EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mdeberta
Results
|   epoch |   train_loss |   train_f1 |   train_exact |   dev_loss |   dev_f1 |   dev_exact |   test_loss |   test_f1 |   test_exact |
|--------:|-------------:|-----------:|--------------:|-----------:|---------:|------------:|------------:|----------:|-------------:|
|       1 |         1.41 |      66.06 |         58.82 |       0.26 |    94.64 |       90.93 |           0 |         0 |            0 |
|       2 |         0.17 |      95.69 |         93.18 |       0.2  |    96.46 |       94.78 |           0 |         0 |            0 |
|       3 |         0.06 |      98.31 |         97.45 |       0.19 |    97.22 |       95.05 |           0 |         0 |            0 |
|       4 |         0.05 |      98.68 |         97.93 |       0.22 |    96.47 |       94.78 |           0 |         0 |            0 |
|       5 |         0.03 |      99.55 |         99.17 |       0.23 |    97    |       95.33 |           0 |         0 |            0 |
|       6 |         0.04 |      99.02 |         98.55 |       0.24 |    97.67 |       95.6  |           0 |         0 |            0 |
|       7 |         0.03 |      99.34 |         98.97 |       0.21 |    96.57 |       94.78 |           0 |         0 |            0 |
|       8 |         0.04 |      99.02 |         98.55 |       0.22 |    96.37 |       94.23 |           0 |         0 |            0 |
|       9 |         0.02 |      99.52 |         99.24 |       0.19 |    98.17 |       96.98 |           0 |         0 |            0 |
|      10 |         0.01 |      99.68 |         99.52 |       0.24 |    96.08 |       94.23 |           0 |         0 |            0 | | 
| 
	paulml/OGNO-7B-GGUF | 
	paulml | 2024-02-13T12:34:49Z | 2 | 1 | null | 
	[
  "gguf",
  "merge",
  "mergekit",
  "lazymergekit",
  "liminerity/Omningotex-7b-slerp",
  "eren23/dpo-binarized-NeutrixOmnibe-7B",
  "base_model:eren23/dpo-binarized-NeutrixOmnibe-7B",
  "base_model:merge:eren23/dpo-binarized-NeutrixOmnibe-7B",
  "base_model:liminerity/Omningotex-7b-slerp",
  "base_model:merge:liminerity/Omningotex-7b-slerp",
  "license:cc-by-nc-4.0",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T10:28:49Z | 
	---
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
base_model:
- liminerity/Omningotex-7b-slerp
- eren23/dpo-binarized-NeutrixOmnibe-7B
license: cc-by-nc-4.0
---
# As with most of the new merges, the quantized version is not working properly.
# OGNO-7B
OGNO-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp)
* [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: liminerity/Omningotex-7b-slerp
        layer_range: [0, 32]
      - model: eren23/dpo-binarized-NeutrixOmnibe-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/Omningotex-7b-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# Install dependencies first: pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "paulml/OGNO-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | 
| 
	ArianAskari/SOLID-SFT-DPO-MixQV4-SOLIDChosen-SFTRejected-Zephyr-7b-beta | 
	ArianAskari | 2024-02-13T12:32:42Z | 6 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "conversational",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T12:24:57Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
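The template leaves this blank; given the `mistral` and `conversational` tags, here is a minimal sketch, assuming the tokenizer ships a chat template:

```python
# Sketch inferred from the repo tags; nothing here comes from the card itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArianAskari/SOLID-SFT-DPO-MixQV4-SOLIDChosen-SFTRejected-Zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO training does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```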
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	NbAiLab/nb-whisper-small-verbatim | 
	NbAiLab | 2024-02-13T12:30:19Z | 174 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "jax",
  "tensorboard",
  "onnx",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "audio",
  "asr",
  "hf-asr-leaderboard",
  "no",
  "nb",
  "nn",
  "en",
  "dataset:NbAiLab/ncc_speech",
  "dataset:NbAiLab/NST",
  "dataset:NbAiLab/NPSC",
  "arxiv:2212.04356",
  "base_model:openai/whisper-small",
  "base_model:quantized:openai/whisper-small",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T10:08:16Z | 
	---
license: apache-2.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
base_model: openai/whisper-small
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
  example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
  example_title: FLEURS sample 2
---
# Finetuned Verbatim model. 
This model was trained for 200 additional steps on top of the model below. As a result, it outputs only lowercase text without punctuation. It is also considerably more verbatim, and makes no attempt to correct grammatical errors in the text.
# NB-Whisper Small Verbatim
Introducing the **_Norwegian NB-Whisper Small Verbatim model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny       | 39M        | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base       | 74M        | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small      | 244M       | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium     | 769M       | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large      | 1550M      | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we demonstrate here how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above, and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny       | 39M        | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base       | 74M        | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small      | 244M       | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium     | 769M       | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large      | 1550M      | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-small](https://huggingface.co/openai/whisper-small)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries. 
$ pip install "transformers>=4.35.2"
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLab/nb-whisper-small-verbatim")
# Transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```python
{'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the recording is longer than 30 seconds. By passing the ```chunk_length_s``` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible; this greatly increases accuracy, but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
```
Timestamps:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
 'chunks': [{'timestamp': (0.0, 5.46),
   'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
  {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
  {'timestamp': (8.68, 16.64),
   'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
  {'timestamp': (16.64, 13.3),
   'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
  {'timestamp': (13.32, 30.28),
   'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
  {'timestamp': (32.52, 39.16),
   'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
  {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
  {'timestamp': (42.0, 46.74),
   'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
  {'timestamp': (46.74, 51.12),
   'text': ' og jenter og gutter som er glad i hverandre.'},
  {'timestamp': (51.16, 57.42),
   'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
  {'timestamp': (57.42, 64.3),
   'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
  {'timestamp': (64.34, 71.24),
   'text': ' Med andre ord, Norge er dere. Norge er oss.'},
  {'timestamp': (71.24, 78.04),
   'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
  {'timestamp': (78.12, 84.68),
   'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
```
Word Level Timestamps:
```json
  {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
  "chunks": [
    {"text": "Nordmenn", "timestamp": [0.72, 1.42]},
    {"text": "er", "timestamp": [1.42, 1.74]},
    // ... more chunks ...
    {"text": "raushet.", "timestamp": [83.1, 84.88]}
  ]
}
```
Nynorsk:
```json
  {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
```
English:
```json
  {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
```
</details>
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. 
We have converted this model to the ggml format used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav                                        
# Let's download the two ggml files from this site
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model.bin -O models/nb-small-ggml-model.bin
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-small/resolve/main/ggml-model-q5_0.bin -O models/nb-small-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-small-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-small-ggml-model-q5_0.bin king.wav
```
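If you prefer to drive the compiled binary from Python instead of the shell, a thin subprocess wrapper is enough. This is only a sketch around the commands above and assumes you are in the `whisper.cpp` directory with the model and WAV file in place:

```python
# Sketch: run the compiled whisper.cpp binary from Python.
# Assumes the ggml model and king.wav were prepared as shown above.
import subprocess

result = subprocess.run(
    ["./main", "-l", "no", "-m", "models/nb-small-ggml-model.bin", "king.wav"],
    capture_output=True,
    text=True,
    check=True,  # raise if whisper.cpp exits with an error
)
print(result.stdout)  # the transcript is written to stdout
```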
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict license that requires you to agree to specific user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-small-verbatim --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
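As a starting point, the Python route could look like the sketch below. The function names follow the WhisperX README at the time of writing and may change between versions, so treat this as an outline rather than a definitive recipe:

```python
# Sketch: transcription, alignment and diarization with the WhisperX Python API.
# Argument names follow the WhisperX README and may differ between versions.
import whisperx

device = "cuda"  # or "cpu"
audio = whisperx.load_audio("knuthamsun.mp3")

# 1. Transcribe with the NB-Whisper model (mirrors the CLI call above)
model = whisperx.load_model("NbAiLabBeta/nb-whisper-small-verbatim", device)
result = model.transcribe(audio, language="no")

# 2. Align timestamps with a phoneme-based Wav2Vec model
align_model, metadata = whisperx.load_align_model(language_code="no", device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# 3. Diarize and assign speakers (requires accepting the pyannote terms)
diarize_model = whisperx.DiarizationPipeline(use_auth_token="YOUR_HF_TOKEN", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(result["segments"])
```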
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
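While the Spaces are up, they can typically also be reached programmatically with `gradio_client`. The Space name and endpoint below are placeholders, so check the "Use via API" link at the bottom of the Space you are using:

```python
# Hypothetical sketch of calling a Space API with gradio_client.
# The Space name and api_name are placeholders, not confirmed values.
from gradio_client import Client

client = Client("NbAiLab/nb-whisper-demo")  # placeholder Space name
result = client.predict("king.mp3", api_name="/predict")  # placeholder endpoint
print(result)
```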
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word transcriptions. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using JAX/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Small Verbatim model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
 | 
| 
	NbAiLab/nb-whisper-medium | 
	NbAiLab | 2024-02-13T12:29:58Z | 517 | 4 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "jax",
  "tensorboard",
  "onnx",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "audio",
  "asr",
  "hf-asr-leaderboard",
  "no",
  "nb",
  "nn",
  "en",
  "dataset:NbAiLab/ncc_speech",
  "dataset:NbAiLab/NST",
  "dataset:NbAiLab/NPSC",
  "arxiv:2212.04356",
  "base_model:openai/whisper-medium",
  "base_model:quantized:openai/whisper-medium",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T10:07:32Z | 
	---
license: apache-2.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
base_model: openai/whisper-medium
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
  example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
  example_title: FLEURS sample 2
---
# NB-Whisper Medium
Introducing the **_Norwegian NB-Whisper Medium model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny       | 39M        | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base       | 74M        | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small      | 244M       | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium     | 769M       | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large      | 1550M      | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we also demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny       | 39M        | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base       | 74M        | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small      | 244M       | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium     | 769M       | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large      | 1550M      | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries. 
$ pip install "transformers>=4.35.2"  # quote so the shell does not treat >= as a redirect
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-medium")
# Transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the audio is longer than 30 seconds. By passing the `chunk_length_s` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible: this greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
```
Timestamps:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
 'chunks': [{'timestamp': (0.0, 5.46),
   'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
  {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
  {'timestamp': (8.68, 16.64),
   'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
  {'timestamp': (16.64, 13.3),
   'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
  {'timestamp': (13.32, 30.28),
   'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
  {'timestamp': (32.52, 39.16),
   'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
  {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
  {'timestamp': (42.0, 46.74),
   'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
  {'timestamp': (46.74, 51.12),
   'text': ' og jenter og gutter som er glad i hverandre.'},
  {'timestamp': (51.16, 57.42),
   'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
  {'timestamp': (57.42, 64.3),
   'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
  {'timestamp': (64.34, 71.24),
   'text': ' Med andre ord, Norge er dere. Norge er oss.'},
  {'timestamp': (71.24, 78.04),
   'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
  {'timestamp': (78.12, 84.68),
   'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
```
Word Level Timestamps:
```json
  {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
  "chunks": [
    {"text": "Nordmenn", "timestamp": [0.72, 1.42]},
    {"text": "er", "timestamp": [1.42, 1.74]},
    // ... more chunks ...
    {"text": "raushet.", "timestamp": [83.1, 84.88]}
  ]
}
```
Nynorsk:
```json
  {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
```
English:
```json
  {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
```
</details>
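For readers who want more control than the pipeline offers, roughly the same single-window transcription can be done with the processor and model classes directly. This sketch is not from the official examples; it assumes `librosa` for audio loading and only covers one 30-second window, so prefer the pipeline for longer files:

```python
# Sketch: single-window transcription without the pipeline helper.
# Assumes librosa is installed for loading/resampling the audio.
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("NbAiLabBeta/nb-whisper-medium")
model = WhisperForConditionalGeneration.from_pretrained("NbAiLabBeta/nb-whisper-medium")

# Whisper expects 16 kHz mono input
speech, _ = librosa.load("king.mp3", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

predicted_ids = model.generate(
    inputs.input_features, task="transcribe", language="no", num_beams=5
)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```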
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. 
We have converted this model to the ggml format used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav                                        
# Let's download the two ggml files from this site
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-medium/resolve/main/ggml-model.bin -O models/nb-medium-ggml-model.bin
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-medium/resolve/main/ggml-model-q5_0.bin -O models/nb-medium-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-medium-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-medium-ggml-model-q5_0.bin king.wav
```
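If ffmpeg is not available, the same 16 kHz mono WAV can be produced in Python. This sketch assumes the `librosa` and `soundfile` packages are installed:

```python
# Sketch: convert king.mp3 to 16 kHz mono 16-bit WAV without ffmpeg.
# Assumes the librosa and soundfile packages are installed.
import librosa
import soundfile as sf

audio, sr = librosa.load("king.mp3", sr=16000, mono=True)
sf.write("king.wav", audio, sr, subtype="PCM_16")  # the format whisper.cpp expects
```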
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict license that requires you to agree to specific user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-medium --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word transcriptions. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using JAX/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Medium model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
 | 
| 
	NbAiLab/nb-whisper-large-verbatim | 
	NbAiLab | 2024-02-13T12:29:51Z | 567 | 2 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "jax",
  "tensorboard",
  "onnx",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "audio",
  "asr",
  "hf-asr-leaderboard",
  "no",
  "nb",
  "nn",
  "en",
  "dataset:NbAiLab/ncc_speech",
  "dataset:NbAiLab/NST",
  "dataset:NbAiLab/NPSC",
  "arxiv:2212.04356",
  "base_model:openai/whisper-large",
  "base_model:quantized:openai/whisper-large",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T10:08:03Z | 
	---
license: apache-2.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
base_model: openai/whisper-large
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
  example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
  example_title: FLEURS sample 2
---
# Finetuned Verbatim model
This model is trained for 200 additional steps on top of the model below. As a result, it outputs only lowercase text without punctuation. It is also considerably more verbatim and will not attempt to correct grammatical errors in the text.
# NB-Whisper Large
Introducing the **_Norwegian NB-Whisper Large model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny       | 39M        | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base       | 74M        | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small      | 244M       | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium     | 769M       | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large      | 1550M      | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we also demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above and might be suitable for more targeted use cases:
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny       | 39M        | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base       | 74M        | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small      | 244M       | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium     | 769M       | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large      | 1550M      | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-large](https://huggingface.co/openai/whisper-large)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries. 
$ pip install "transformers>=4.35.2"  # quote so the shell does not treat >= as a redirect
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLabBeta/nb-whisper-large-verbatim")
# Transcribe
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the audio is longer than 30 seconds. By passing the `chunk_length_s` argument, we can transcribe longer files. Our experience is that we get slightly better results by setting it to 28 seconds instead of the default 30 seconds. We also recommend setting the beam size to 5 if possible: this greatly increases the accuracy but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
```
Timestamps:
```json
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
 'chunks': [{'timestamp': (0.0, 5.46),
   'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
  {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
  {'timestamp': (8.68, 16.64),
   'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
  {'timestamp': (16.64, 13.3),
   'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
  {'timestamp': (13.32, 30.28),
   'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
  {'timestamp': (32.52, 39.16),
   'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
  {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
  {'timestamp': (42.0, 46.74),
   'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
  {'timestamp': (46.74, 51.12),
   'text': ' og jenter og gutter som er glad i hverandre.'},
  {'timestamp': (51.16, 57.42),
   'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
  {'timestamp': (57.42, 64.3),
   'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
  {'timestamp': (64.34, 71.24),
   'text': ' Med andre ord, Norge er dere. Norge er oss.'},
  {'timestamp': (71.24, 78.04),
   'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
  {'timestamp': (78.12, 84.68),
   'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
```
Word Level Timestamps:
```json
  {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
  "chunks": [
    {"text": "Nordmenn", "timestamp": [0.72, 1.42]},
    {"text": "er", "timestamp": [1.42, 1.74]},
    // ... more chunks ...
    {"text": "raushet.", "timestamp": [83.1, 84.88]}
  ]
}
```
Nynorsk:
```json
  {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
```
English:
```json
  {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
```
</details>
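The sentence-level timestamps shown above are easy to turn into subtitles with a little post-processing. The sketch below is based only on the output format illustrated in this card, a `chunks` list of `{'timestamp': (start, end), 'text': ...}` entries, and reuses the `asr` pipeline defined earlier:

```python
# Sketch: convert pipeline chunks into an SRT subtitle file.
# Assumes the chunk format shown in the expected output above.
def to_srt_time(seconds: float) -> str:
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

result = asr("king.mp3", chunk_length_s=28, return_timestamps=True,
             generate_kwargs={'task': 'transcribe', 'language': 'no'})

with open("king.srt", "w", encoding="utf-8") as srt:
    for i, chunk in enumerate(result["chunks"], start=1):
        start, end = chunk["timestamp"]
        srt.write(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n"
                  f"{chunk['text'].strip()}\n\n")
```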
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. 
We have converted this model to the ggml format used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav                                        
# Let's download the two ggml files from this site
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-large/resolve/main/ggml-model.bin -O models/nb-large-ggml-model.bin
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-large/resolve/main/ggml-model-q5_0.bin -O models/nb-large-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-large-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-large-ggml-model-q5_0.bin king.wav
```
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, enhancing the quality of transcribing meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict license that requires you to agree to specific user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLabBeta/nb-whisper-large-verbatim --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
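Once WhisperX has produced aligned, diarized segments, printing a speaker-labelled transcript takes only a few lines of Python. The segment keys below follow the WhisperX documentation and may differ between versions:

```python
# Sketch: print a speaker-labelled transcript from a WhisperX result dict.
# Assumes each segment carries 'start', 'end', 'speaker' and 'text' keys.
def print_diarized(result: dict) -> None:
    for seg in result["segments"]:
        speaker = seg.get("speaker", "UNKNOWN")
        print(f"[{seg['start']:7.2f}-{seg['end']:7.2f}] {speaker}: {seg['text'].strip()}")
```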
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word transcriptions. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using JAX/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Large model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and HuggingFace's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
 | 
| 
	NbAiLab/nb-whisper-base | 
	NbAiLab | 2024-02-13T12:29:46Z | 120 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tf",
  "jax",
  "tensorboard",
  "onnx",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "audio",
  "asr",
  "hf-asr-leaderboard",
  "no",
  "nb",
  "nn",
  "en",
  "dataset:NbAiLab/ncc_speech",
  "dataset:NbAiLab/NST",
  "dataset:NbAiLab/NPSC",
  "arxiv:2212.04356",
  "base_model:openai/whisper-base",
  "base_model:quantized:openai/whisper-base",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T10:07:48Z | 
	---
license: apache-2.0
language:
- 'no'
- nb
- nn
- en
datasets:
- NbAiLab/ncc_speech
- NbAiLab/NST
- NbAiLab/NPSC
base_model: openai/whisper-base
tags:
- audio
- asr
- automatic-speech-recognition
- hf-asr-leaderboard
metrics:
- wer
- cer
library_name: transformers
pipeline_tag: automatic-speech-recognition
widget:
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/1/audio/audio.mp3
  example_title: FLEURS sample 1
- src: https://datasets-server.huggingface.co/assets/google/fleurs/--/nb_no/train/4/audio/audio.mp3
  example_title: FLEURS sample 2
---
# NB-Whisper Base
Introducing the **_Norwegian NB-Whisper Base model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
| Model Size | Parameters | Model |
|------------|------------|------------|
| Tiny       | 39M        | [NB-Whisper Tiny](https://huggingface.co/NbAiLab/nb-whisper-tiny) |
| Base       | 74M        | [NB-Whisper Base](https://huggingface.co/NbAiLab/nb-whisper-base) |
| Small      | 244M       | [NB-Whisper Small](https://huggingface.co/NbAiLab/nb-whisper-small) |
| Medium     | 769M       | [NB-Whisper Medium](https://huggingface.co/NbAiLab/nb-whisper-medium) |
| Large      | 1550M      | [NB-Whisper Large](https://huggingface.co/NbAiLab/nb-whisper-large) |
### Verbatim Model
While the main models are suitable for most transcription tasks, we demonstrate how easy it is to change the output of the main model. The following models are trained for 250 additional steps from the main models above and might be suitable for more targeted use cases (a minimal loading sketch follows the table below):
- **Verbatim version**: This lower-cased variant is more literal and suitable for tasks requiring detailed transcription, such as linguistic analysis.
| Model Size | Parameters | Semantic version |
|------------|------------|------------------|
| Tiny       | 39M        | [Tiny - semantic](https://huggingface.co/NbAiLab/nb-whisper-tiny-semantic) |
| Base       | 74M        | [Base - semantic](https://huggingface.co/NbAiLab/nb-whisper-base-semantic) |
| Small      | 244M       | [Small - semantic](https://huggingface.co/NbAiLab/nb-whisper-small-semantic) |
| Medium     | 769M       | [Medium - semantic](https://huggingface.co/NbAiLab/nb-whisper-medium-semantic) |
| Large      | 1550M      | [Large - semantic](https://huggingface.co/NbAiLab/nb-whisper-large-semantic) |
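Switching to one of these variants requires no code changes beyond the repository id. As a minimal sketch (not from the original card, and assuming the variant repositories expose the same pipeline interface as the main model):

```python
from transformers import pipeline

# Same call as for the main model; only the repository id changes.
# "audio.mp3" is a placeholder file path.
asr = pipeline("automatic-speech-recognition", "NbAiLab/nb-whisper-base-semantic")
asr("audio.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```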
### Model Description
- **Developed by:** [NB AI-Lab](https://ai.nb.no/)
- **Shared by:** [NB AI-Lab](https://ai.nb.no/)
- **Model type:** `whisper`
- **Language(s) (NLP):** Norwegian, Norwegian Bokmål, Norwegian Nynorsk, English
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)
- **Trained from model:** [openai/whisper-base](https://huggingface.co/openai/whisper-base)
- **Code Repository:** https://github.com/NbAiLab/nb-whisper/
- **Paper:** _Coming soon_
- **Demo:** _See Spaces on this page_
## How to Use the Models
### Online Demos
You can try the models directly through the HuggingFace Inference API, accessible on the right side of this page. Be aware that initially, the model needs to load and will run on limited CPU capacity, which might be slow. To enhance your experience, we are temporarily hosting some models on TPUs for a few days, significantly boosting their performance. Explore these under the **Spaces** section on the [Main Page](https://huggingface.co/NbAiLab/).
### Local Setup with HuggingFace
Alternatively, you can run the models locally. The Tiny, Base, and Small models are optimized for CPU execution. For the Medium and Large models, we recommend a system equipped with a GPU to ensure efficient processing. Setting up and using these models with HuggingFace's Transformers is straightforward, provided you have [Python](https://www.python.org/downloads/) installed on your machine. For practical demonstrations, refer to examples using this [sample mp3 file](https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3).
```bash
# Download the sample file
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
# Install necessary libraries (quoting prevents the shell from treating > as a redirect)
$ pip install 'transformers>=4.35.2'
```
After this is done, you should be able to run this in Python:
```python
from transformers import pipeline
# Load the model
asr = pipeline("automatic-speech-recognition", "NbAiLab/nb-whisper-base")
# Transcribe the sample file
asr("king.mp3", generate_kwargs={'task': 'transcribe', 'language': 'no'})
```
<details>
<summary>Expected output</summary>
```python
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra.'}
```
</details>
#### Extended HuggingFace
Examining the output above, we see that there are multiple repetitions at the end. This is because the audio is longer than 30 seconds. By passing the `chunk_length_s` argument, we can transcribe longer files. In our experience, results are slightly better when this is set to 28 seconds instead of the default 30. We also recommend setting the beam size to 5 if possible; this greatly increases accuracy, but takes a bit longer and requires slightly more memory. The examples below also illustrate how to transcribe to English or Nynorsk, and how to get timestamps for sentences and words.
```python
# Long Transcripts
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Increase accuracy by setting beam size to 5
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'num_beams': 5, 'task': 'transcribe', 'language': 'no'})
# Return Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps=True, generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Return Word Level Timestamps
asr("king.mp3", chunk_length_s=28, return_timestamps="word", generate_kwargs={'task': 'transcribe', 'language': 'no'})
# Transcribe to Nynorsk
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'nn'})
# Transcribe to English
asr("king.mp3", chunk_length_s=28, generate_kwargs={'task': 'transcribe', 'language': 'en'})
```
<details>
<summary>Expected output</summary>
Long transcripts:
```python
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}
```
Timestamps:
```python
  {'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra. Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.',
 'chunks': [{'timestamp': (0.0, 5.46),
   'text': ' Nordmenn er nordlendinger, trøndere, sørlendinger'},
  {'timestamp': (5.52, 8.68), 'text': ' og folk fra alle andre regioner.'},
  {'timestamp': (8.68, 16.64),
   'text': ' Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria.'},
  {'timestamp': (16.64, 13.3),
   'text': ' Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi er fra.'},
  {'timestamp': (13.32, 30.28),
   'text': ' Hvilken nasjonalitet vi er fra. hvilken nasjonalitet vi tilhører.'},
  {'timestamp': (32.52, 39.16),
   'text': ' Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres'},
  {'timestamp': (39.16, 42.0), 'text': ' innenfor landegrenser.'},
  {'timestamp': (42.0, 46.74),
   'text': ' Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter,'},
  {'timestamp': (46.74, 51.12),
   'text': ' og jenter og gutter som er glad i hverandre.'},
  {'timestamp': (51.16, 57.42),
   'text': ' Nordmenn trommer på Gud, Allah, Altet og ingenting.'},
  {'timestamp': (57.42, 64.3),
   'text': ' Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes.'},
  {'timestamp': (64.34, 71.24),
   'text': ' Med andre ord, Norge er dere. Norge er oss.'},
  {'timestamp': (71.24, 78.04),
   'text': ' Mitt største håp for Norge er at vi skal klare å ta vare på hverandre,'},
  {'timestamp': (78.12, 84.68),
   'text': ' at vi skal bygge dette landet videre på tillit, fellesskap og raushet.'}]}
```
Word Level Timestamps:
```python
  {"text": "Nordmenn er nordlendinger, trøndere, sørlendinger og folk fra alle andre regioner. Nordmenn er også innvandret fra Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikke alltid så lett å si hvor vi er fra, hvilken nasjonalitet vi tilhører. Det vi kaller hjem, er der hjertet vårt er, og det kan ikke alltid plasseres innenfor landegrenser. Nordmenn er jenter som er glad i jenter, gutter som er glad i gutter, og jenter og gutter som er glad i hverandre. Nordmenn trommer på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbilis og Kari Bremnes. Med andre ord, Norge er dere. Norge er oss. Mitt største håp for Norge er at vi skal klare å ta vare på hverandre, at vi skal bygge dette landet videre på tillit, fellesskap og raushet.",
  "chunks": [
    {"text": "Nordmenn", "timestamp": [0.72, 1.42]},
    {"text": "er", "timestamp": [1.42, 1.74]},
    # ... more chunks ...
    {"text": "raushet.", "timestamp": [83.1, 84.88]}
  ]
  }
```
Nynorsk:
```python
  {"text": "Nordmenn er nordlendingar, trøndarar, sørlendingar og folk frå alle andre regionar. Nordmenn er også innvandra frå Afghanistan, Pakistan, Polen, Sverige, Somalia og Syria. Det er ikkje alltid så lett å seie kvar vi er frå, kva nasjonalitet vi tilhøyrer. Det vi kallar heim, er der hjartet vårt er, og det kan ikkje alltid plasserast innanfor landegrenser. Nordmenn er jenter som er glad i jenter, gutar som erade i gutar, og jenter og gutar som er glade i kvarandre. Nordmenn trommar på Gud, Allah, Altet og ingenting. Nordmenn liker Grieg, Kygo, Helbiles og Kari Bremnes. Med andre ord, Noreg er dere! Noreg er oss. Mitt største håp for Noreg er at vi skal klare å ta vare på kvarandre, at vi skal byggje dette landet vidare på tillit, fellesskap og raushet."}
```
English:
```python
  {"text": "Norwegians are Norwegians, trønders, southerners and people from all other regions. Norwegians are also invaded from Afghanistan, Pakistan, Poland, Sweden, Somalia and Suria. It is not always so easy to say where we are from, what nationality we belong to. What we call home is where our heart is, and it cannot always be placed within national borders. Norwegians are girls who like girls, boys who like boys, and girls and boys who like each other. Norwegians thrump on God, Allah, Altet and nothing. Norwegians like Grieg, Kygo, Helbilis and Kari Bremnes. In other words, Norway is you. Norway is us. My biggest hope for Norway is that we should be able to take care of each other, that we should build this country on trust, community and generosity."}
```
</details>
### Whisper CPP
Whisper CPP is a C++ implementation of the Whisper model, offering the same functionalities with the added benefits of C++ efficiency and performance optimizations. This allows embedding any Whisper model into a binary file, facilitating the development of real applications. However, it requires some familiarity with compiling C++ programs. Their [homepage](https://github.com/ggerganov/whisper.cpp) provides examples of how to build applications, including real-time transcription. 
We have converted this model to the ggml format used by Whisper CPP binaries. The file can be downloaded [here](blob/main/ggml-model.bin), and a `q5_0` quantized version is also available [here](blob/main/ggml-model-q5_0.bin).
```bash
# We can download and compile whisper.cpp
$ git clone --depth 1 https://github.com/ggerganov/whisper.cpp --branch v1.5.1
$ cd whisper.cpp/
$ make
# We also need to convert the audio to WAV as that is the only format supported by whisper.cpp
$ wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/king.mp3
$ ffmpeg -i king.mp3 -ar 16000 -ac 1 -c:a pcm_s16le king.wav                                        
# Let's download the two ggml files from this repository
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-base/resolve/main/ggml-model.bin -O models/nb-base-ggml-model.bin
$ wget -N https://huggingface.co/NbAiLab/nb-whisper-base/resolve/main/ggml-model-q5_0.bin -O models/nb-base-ggml-model-q5_0.bin
# And run it with the f16 default model
$ ./main -l no -m models/nb-base-ggml-model.bin king.wav
# Or the quantized version
$ ./main -l no -m models/nb-base-ggml-model-q5_0.bin king.wav
```
### WhisperX and Speaker Diarization
Speaker diarization is a technique in natural language processing and automatic speech recognition that identifies and separates different speakers in an audio recording. It segments the audio into parts based on who is speaking, improving the quality of transcriptions of meetings or phone calls. We find that [WhisperX](https://github.com/m-bain/whisperX) is the easiest way to use our models for diarizing speech. In addition, WhisperX uses phoneme-based Wav2Vec models to improve the alignment of the timestamps. As of December 2023 it also has native support for the nb-wav2vec models. It currently uses [PyAnnote-audio](https://github.com/pyannote/pyannote-audio) for the actual diarization. This package has a fairly strict license that requires you to agree to its user terms. Follow the instructions below.
```bash
# Follow the install instructions on https://github.com/m-bain/whisperX
# Make sure you have a HuggingFace account and have agreed to the pyannote terms
# Log in (or supply HF Token in command line)
huggingface-cli login
# Download a test file
wget -N https://github.com/NbAiLab/nb-whisper/raw/main/audio/knuthamsun.mp3
# Optional. If you get complaints about missing support for Norwegian, do:
pip uninstall whisperx && pip install git+https://github.com/m-bain/whisperx.git@8540ff5985fceee764acbed94f656063d7f56540
# Transcribe the test file. All transcripts will end up in the directory of the mp3-file
whisperx knuthamsun.mp3 --model NbAiLab/nb-whisper-base --language no --diarize
```
You can also run WhisperX from Python. Please take a look at the instructions on [WhisperX homepage](https://github.com/m-bain/whisperX).
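As a rough sketch of the Python route (following the API documented on the WhisperX homepage; exact signatures may differ between versions, and we assume the model id is accepted the same way as in the CLI call above):

```python
import whisperx

device = "cuda"  # use "cpu" if no GPU is available

# Transcribe with the NB-Whisper model (model id assumed, as in the CLI example)
audio = whisperx.load_audio("knuthamsun.mp3")
model = whisperx.load_model("NbAiLab/nb-whisper-base", device)
result = model.transcribe(audio, batch_size=16)

# Align timestamps with a phoneme-based wav2vec model for Norwegian
align_model, metadata = whisperx.load_align_model(language_code="no", device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# Diarize; requires a HuggingFace token and accepting the pyannote user terms
diarize_model = whisperx.DiarizationPipeline(use_auth_token="hf_...", device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)
print(result["segments"])
```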
### API
Instructions for accessing the models via a simple API are included in the demos under Spaces. Note that these demos are temporary and will only be available for a few weeks.
## Training Data
The training data originates from Språkbanken and the National Library of Norway's digital collection, including:
- NST Norwegian ASR Database (16 kHz) and its corresponding dataset
- Transcribed speeches from the Norwegian Parliament by Språkbanken
- TV broadcast (NRK) subtitles (NLN digital collection)
- Audiobooks (NLN digital collection)
## Downstream Use
The models, especially the smaller ones, may exhibit occasional hallucinations and may drop parts of the transcript. They are designed to convert spoken language into grammatically correct written sentences, which might not always be word-for-word transcriptions. We have made two extra model variants for users who want a different transcription style. We encourage users to try the models themselves to get a better understanding.
## Bias, Risks, and Limitations
Using these models without adequate risk assessment and mitigation could be considered irresponsible. They may contain biases or other undesirable distortions. Users who deploy these models or integrate them into systems or services are responsible for mitigating risks and complying with applicable AI regulations. The National Library of Norway, as the model owner, disclaims liability for any outcomes resulting from third-party use of these models.
### Software
The model was trained using Jax/Flax and converted to PyTorch, TensorFlow, whisper.cpp, and ONNX formats. These are available under `Files and versions`. We welcome requests for conversion to other formats. All training code and scripts are released under the Apache License 2.0 in the GitHub repository [nb-whisper](https://github.com/NbAiLab/nb-whisper/).
## Citation & Contributors
The NB-Whisper Base model is a product of the NoSTram project led by Per Egil Kummervold ([@pere](https://huggingface.co/pere)) at the National Library of Norway. Key contributors include Javier de la Rosa ([@versae](https://huggingface.co/versae)), Freddy Wetjen ([@freddyw](https://huggingface.co/freddyw)), and Rolv-Arild Braaten ([@Rolv-Arild](https://huggingface.co/Rolv-Arild)). NB AI-Lab, under the direction of Svein Arne Brygfjeld ([@Brygfjeld](https://huggingface.co/Brygfjeld)), supported the project's successful completion. A detailed paper on our process and findings is forthcoming.
## Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.
## Acknowledgements
Our gratitude extends to [Google TPU Research Cloud](https://sites.research.google/trc/about/) for training resources, Google Cloud for translation credits, and Hugging Face's Sanchit Gandhi for technical support. A special thank you to Per Erik Solberg at Språkbanken for the collaboration on the Stortinget corpus.
## Contact
For feedback, technical concerns, or collaboration inquiries, please contact <a rel="noopener nofollow" href="mailto:[email protected]">[email protected]</a>. If you plan to include this model in your research, contact us for the latest information on our upcoming paper for citation purposes.
 | 
| 
	malaysia-ai/malay-sentiment-deberta-xsmall | 
	malaysia-ai | 2024-02-13T12:18:40Z | 108 | 1 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "deberta-v2",
  "text-classification",
  "sentiment",
  "ms",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T12:06:53Z | 
	---
language:
- ms
tags:
- sentiment
---
# Malay-Language Sentiment Classification
## Overview
This model is a fine-tuned checkpoint of [Deberta-V3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall). It performs binary sentiment analysis for Malay-language text, predicting either positive (1) or negative (0) sentiment for each instance. The model is trained on all data from https://github.com/mesolitica/malaysian-dataset/tree/master/sentiment.
## Use in a Hugging Face pipeline
The easiest way to use the model for single predictions is Hugging Face's [sentiment analysis pipeline](https://huggingface.co/transformers/quicktour.html#getting-started-on-a-task-with-a-pipeline), which needs only a couple of lines of code, as shown in the following example:
```python
from transformers import pipeline

sentiment_analysis = pipeline("sentiment-analysis", model="malaysia-ai/malay-sentiment-deberta-xsmall")
print(sentiment_analysis("saya comel"))  # "saya comel" is Malay for "I am cute"
``` | 
| 
	mach-12/ecommerce-ner | 
	mach-12 | 2024-02-13T12:05:25Z | 105 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "distilbert",
  "token-classification",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2024-02-13T12:04:51Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	mlabonne/Monarch-7B-slerp | 
	mlabonne | 2024-02-13T11:47:35Z | 5 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "merge",
  "mergekit",
  "lazymergekit",
  "base_model:mlabonne/NeuBeagle-7B",
  "base_model:merge:mlabonne/NeuBeagle-7B",
  "base_model:mlabonne/OmniTruthyBeagle-7B-v0",
  "base_model:merge:mlabonne/OmniTruthyBeagle-7B-v0",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T11:39:41Z | 
	---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/OmniTruthyBeagle-7B-v0
- mlabonne/NeuBeagle-7B
---
# Monarch-7B-slerp
Monarch-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: mlabonne/OmniTruthyBeagle-7B-v0
        layer_range: [0, 32]
      - model: mlabonne/NeuBeagle-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: mlabonne/OmniTruthyBeagle-7B-v0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Monarch-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | 
| 
	bhuvanmdev/flan-t5-google-resume-parser | 
	bhuvanmdev | 2024-02-13T11:43:46Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T10:18:46Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	Maaz911/NewModal-Falcon-1B | 
	Maaz911 | 2024-02-13T11:41:08Z | 14 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "falcon",
  "text-generation",
  "custom_code",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "4-bit",
  "bitsandbytes",
  "region:us"
] | 
	text-generation | 2024-02-13T11:41:06Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	jyothimaria/my-pet-dog | 
	jyothimaria | 2024-02-13T11:39:34Z | 1 | 0 | 
	diffusers | 
	[
  "diffusers",
  "safetensors",
  "NxtWave-GenAI-Webinar",
  "text-to-image",
  "stable-diffusion",
  "license:creativeml-openrail-m",
  "autotrain_compatible",
  "endpoints_compatible",
  "diffusers:StableDiffusionPipeline",
  "region:us"
] | 
	text-to-image | 2024-02-13T11:24:38Z | 
	---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by jyothimaria following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: SCET 222040
Sample pictures of this concept:
  
  
  
  
      
 | 
| 
	MatrixAwakens/my-pet-xzg-cat | 
	MatrixAwakens | 2024-02-13T11:33:30Z | 0 | 0 | 
	diffusers | 
	[
  "diffusers",
  "safetensors",
  "NxtWave-GenAI-Webinar",
  "text-to-image",
  "stable-diffusion",
  "license:creativeml-openrail-m",
  "autotrain_compatible",
  "endpoints_compatible",
  "diffusers:StableDiffusionPipeline",
  "region:us"
] | 
	text-to-image | 2024-02-13T11:26:53Z | 
	---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-XZG-Cat Dreambooth model trained by MatrixAwakens following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21CS02003
Sample pictures of this concept:
  
      
 | 
| 
	IHaBiS/maid-yuzu-v7-exl2-rpcal | 
	IHaBiS | 2024-02-13T11:31:15Z | 15 | 1 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mixtral",
  "text-generation",
  "mergekit",
  "merge",
  "base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
  "base_model:merge:cognitivecomputations/dolphin-2.7-mixtral-8x7b",
  "base_model:smelborp/MixtralOrochi8x7B",
  "base_model:merge:smelborp/MixtralOrochi8x7B",
  "base_model:ycros/BagelMIsteryTour-v2-8x7B",
  "base_model:merge:ycros/BagelMIsteryTour-v2-8x7B",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T11:26:41Z | 
	---
base_model:
- ycros/BagelMIsteryTour-v2-8x7B
- smelborp/MixtralOrochi8x7B
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v7
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
I don't know much about merging, so this may be a naive method, but I was curious how the models would behave if merged this way.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
This model was built in two stages: [Orochi](https://huggingface.co/smelborp/MixtralOrochi8x7B) was first merged with [dolphin](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b) using SLERP with t = 0.15, and the resulting model was then merged with [BagelMIsteryTour](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B) using SLERP with t = 0.2.
### Models Merged
The following models were included in the merge:
* [ycros/BagelMIsteryTour-v2-8x7B](https://huggingface.co/ycros/BagelMIsteryTour-v2-8x7B)
* ../maid-yuzu-v7-base
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
  model:
    path: ../maid-yuzu-v7-base
dtype: bfloat16
merge_method: slerp
parameters:
  t:
  - value: 0.2
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: ../maid-yuzu-v7-base
  - layer_range: [0, 32]
    model:
      model:
        path: ycros/BagelMIsteryTour-v2-8x7B
```
 | 
| 
	OmarHaroon01/t5_pretrain_final_final_final_kaggle | 
	OmarHaroon01 | 2024-02-13T11:27:19Z | 93 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "t5",
  "text2text-generation",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text2text-generation | 2024-02-13T10:44:25Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	shadowml/OmnixBeagle-7B | 
	shadowml | 2024-02-13T11:24:51Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "merge",
  "mergekit",
  "lazymergekit",
  "base_model:Gille/StrangeMerges_21-7B-slerp",
  "base_model:finetune:Gille/StrangeMerges_21-7B-slerp",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T11:18:49Z | 
	---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- Gille/StrangeMerges_21-7B-slerp
---
# OmnixBeagle-7B
OmnixBeagle-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_21-7B-slerp](https://huggingface.co/Gille/StrangeMerges_21-7B-slerp)
## 🧩 Configuration
```yaml
models:
  - model: eren23/dpo-binarized-NeutrixOmnibe-7B
    # No parameters necessary for base model
  - model: Gille/StrangeMerges_21-7B-slerp
    parameters:
      density: 0.53
      weight: 0.6
merge_method: dare_ties
base_model: eren23/dpo-binarized-NeutrixOmnibe-7B
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/OmnixBeagle-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | 
| 
	pgajo/mbert_EW-TT-PE_U0_S1_Tingredient_P0.75_DROP1_mbert_E9_DEV89.0 | 
	pgajo | 2024-02-13T11:24:40Z | 93 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "bert",
  "question-answering",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2024-02-13T11:23:48Z | 
	---
{}
---
Model description:
    Model: bert-base-multilingual-cased
    Dataset: TASTEset
    Unshuffled ratio: ['0']
    Shuffled ratio: ['1']
    Best exact match epoch: 9
    Best exact match: 89.29
    Best epoch: 9
    Drop duplicates: ['1']
    Max epochs = 10
    Optimizer lr = 3e-05
    Optimizer eps = 1e-08
    Batch size = 32
    Dataset path = pgajo/EW-TT-PE_U0_S1_Tingredient_P0.75_DROP1_mbert
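As a usage sketch (not part of the original description): the checkpoint is tagged as an extractive question-answering model, so it should be loadable with the standard pipeline. The question and context below are made-up placeholders, not drawn from TASTEset:

```python
from transformers import pipeline

# Hypothetical example; the question/context strings are illustrative only.
qa = pipeline(
    "question-answering",
    model="pgajo/mbert_EW-TT-PE_U0_S1_Tingredient_P0.75_DROP1_mbert_E9_DEV89.0",
)
print(qa(question="Which ingredient is added?", context="Add 200 g of flour to the bowl."))
```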
    
Results
|   epoch |   train_loss |   train_f1 |   train_exact |   dev_loss |   dev_f1 |   dev_exact |   test_loss |   test_f1 |   test_exact |
|--------:|-------------:|-----------:|--------------:|-----------:|---------:|------------:|------------:|----------:|-------------:|
|       1 |         3.23 |      10.64 |          2.55 |       2.56 |    18.12 |        8.52 |           0 |         0 |            0 |
|       2 |         1.2  |      59    |         48.62 |       0.59 |    83.96 |       75.27 |           0 |         0 |            0 |
|       3 |         0.37 |      88.86 |         83.61 |       0.46 |    91.16 |       85.16 |           0 |         0 |            0 |
|       4 |         0.17 |      94.22 |         91.18 |       0.48 |    90.52 |       85.44 |           0 |         0 |            0 |
|       5 |         0.09 |      97.37 |         95.8  |       0.5  |    89.31 |       83.79 |           0 |         0 |            0 |
|       6 |         0.06 |      98.07 |         96.76 |       0.47 |    91.89 |       89.01 |           0 |         0 |            0 |
|       7 |         0.04 |      98.67 |         97.59 |       0.53 |    92.25 |       87.36 |           0 |         0 |            0 |
|       8 |         0.04 |      98.93 |         97.93 |       0.48 |    92.88 |       89.01 |           0 |         0 |            0 |
|       9 |         0.03 |      99.31 |         98.9  |       0.51 |    93.68 |       89.29 |           0 |         0 |            0 |
|      10 |         0.01 |      99.57 |         99.24 |       0.5  |    94.13 |       89.29 |           0 |         0 |            0 | | 
| 
	Hongsong/Policy_Gradient_Pixelcopter | 
	Hongsong | 2024-02-13T11:16:41Z | 0 | 0 | null | 
	[
  "Pixelcopter-PLE-v0",
  "reinforce",
  "reinforcement-learning",
  "custom-implementation",
  "deep-rl-class",
  "model-index",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T07:01:26Z | 
	---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Policy_Gradient_Pixelcopter
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 34.30 +/- 21.43
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
   | 
| 
	xiongjie/test | 
	xiongjie | 2024-02-13T11:14:56Z | 62 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "opt",
  "text-generation",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "2-bit",
  "gptq",
  "region:us"
] | 
	text-generation | 2024-02-13T11:08:47Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	sduo/qq | 
	sduo | 2024-02-13T11:13:45Z | 1 | 0 | 
	diffusers | 
	[
  "diffusers",
  "text-to-image",
  "stable-diffusion",
  "lora",
  "template:sd-lora",
  "base_model:runwayml/stable-diffusion-v1-5",
  "base_model:adapter:runwayml/stable-diffusion-v1-5",
  "license:apache-2.0",
  "region:us"
] | 
	text-to-image | 2024-02-13T11:13:40Z | 
	---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
  output:
    url: images/AlenaAenami_Lights_1k.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: qq
license: apache-2.0
---
# replicate_lora
<Gallery />
## Trigger words
You should use `qq` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/sduo/qq/tree/main) them in the Files & versions tab.
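As a rough sketch (not provided by the author), the LoRA can typically be applied to the listed base model with `diffusers`; the prompt is illustrative and must contain the trigger word:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Assumption: the repo's safetensors LoRA is loadable via load_lora_weights.
pipe.load_lora_weights("sduo/qq")
image = pipe("qq, city lights at dusk").images[0]  # `qq` triggers the style
image.save("sample.png")
```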
 | 
| 
	konz00/Kunocchini-7b-GGUF | 
	konz00 | 2024-02-13T11:09:36Z | 47 | 2 | 
	transformers | 
	[
  "transformers",
  "gguf",
  "text-generation",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-10T07:52:24Z | 
	---
library_name: transformers
pipeline_tag: text-generation
---
GGUF version for [Test157t/Kunocchini-7b](https://huggingface.co/Test157t/Kunocchini-7b)
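No usage instructions are given; a minimal sketch with `llama-cpp-python` could look like this (the quant filename glob is an assumption; pick the actual file from the Files tab):
```python
from llama_cpp import Llama

# Assumption: a Q4_K_M quant exists in the repo; adjust the glob to the real filename.
llm = Llama.from_pretrained(
    repo_id="konz00/Kunocchini-7b-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
print(llm("Q: What is a llama? A:", max_tokens=64)["choices"][0]["text"])
```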
 | 
| 
	diffuser34/autotrain-uzdtm-nwkp2 | 
	diffuser34 | 2024-02-13T11:03:10Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "joblib",
  "autotrain",
  "tabular",
  "regression",
  "tabular-regression",
  "dataset:autotrain-uzdtm-nwkp2/autotrain-data",
  "endpoints_compatible",
  "region:us"
] | 
	tabular-regression | 2024-02-13T10:54:14Z | 
	---
tags:
- autotrain
- tabular
- regression
- tabular-regression
datasets:
- autotrain-uzdtm-nwkp2/autotrain-data
pipeline_tag: tabular-regression
library_name: transformers
---
# Model Trained Using AutoTrain
- Problem type: Tabular regression
## Validation Metrics
- r2: 0.5287307064016351
- mse: 3.103168000915719e+19
- mae: 2243863540.8
- rmse: 5570608585.168877
- rmsle: 8.027979609819264
- loss: 5570608585.168877
## Best Params
- learning_rate: 0.11299209471906922
- reg_lambda: 1.95078305416454e-06
- reg_alpha: 0.03568550183373181
- subsample: 0.6486218191662874
- colsample_bytree: 0.22654368454464396
- max_depth: 1
- early_stopping_rounds: 481
- n_estimators: 20000
- eval_metric: rmse
## Usage
```python
import json
import joblib
import pandas as pd
model = joblib.load('model.joblib')
config = json.load(open('config.json'))
features = config['features']
data = pd.read_csv("data.csv")  # load the rows to score; must contain the training feature columns
data = data[features]
predictions = model.predict(data)  # or model.predict_proba(data)
# predictions can be converted to original labels using label_encoders.pkl
``` | 
| 
	seb1234/textual_inversion_doll | 
	seb1234 | 2024-02-13T11:02:33Z | 30 | 0 | 
	diffusers | 
	[
  "diffusers",
  "tensorboard",
  "safetensors",
  "stable-diffusion",
  "stable-diffusion-diffusers",
  "text-to-image",
  "textual_inversion",
  "base_model:runwayml/stable-diffusion-v1-5",
  "base_model:adapter:runwayml/stable-diffusion-v1-5",
  "license:creativeml-openrail-m",
  "autotrain_compatible",
  "endpoints_compatible",
  "diffusers:StableDiffusionPipeline",
  "region:us"
] | 
	text-to-image | 2024-02-13T10:46:58Z | 
	
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
    
# Textual inversion text2image fine-tuning - seb1234/textual_inversion_doll
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.
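A minimal sketch for trying the embedding (the placeholder token below is hypothetical; use the token name defined in this repo's learned-embeddings file):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Loads the learned embedding from this repo and registers its placeholder token.
pipe.load_textual_inversion("seb1234/textual_inversion_doll")
image = pipe("a photo of a <doll> sitting on a shelf").images[0]  # <doll> is a hypothetical token name
image.save("doll.png")
```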
 | 
| 
	umarigan/Trendyol-LLM-7b-chat-v0.1-GGUF | 
	umarigan | 2024-02-13T11:01:46Z | 0 | 0 | null | 
	[
  "gguf",
  "text-generation",
  "tr",
  "en",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T07:30:35Z | 
	---
language:
- tr
- en
pipeline_tag: text-generation
license: apache-2.0
---
<img src="https://huggingface.co/Trendyol/Trendyol-LLM-7b-chat-v0.1/resolve/main/llama-tr-image.jpeg"
alt="drawing" width="400"/>
# **Trendyol LLM GGUF Version**
Trendyol LLM is a generative model based on the LLaMa2 7B model. This is the repository for the quantized chat model.
**Developer** Umar Igan
**GGUF version** created using the following notebook: https://github.com/mlabonne/llm-course/blob/main/Quantize_Llama_2_models_using_GGUF_and_llama_cpp.ipynb
**Variations** Q5_K_M and Q4_K_M GGUF variants.
**Input** The model takes text input only.
**Output** The model generates text only.
**Model Architecture** Trendyol LLM is an auto-regressive language model (based on LLaMa2 7B) that uses an optimized transformer architecture. The chat version is fine-tuned on 180K instruction examples using LoRA, with the trainable modules shown below.
This is a quantized version of Trendyol LLM:
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/lora_diagram.png"
alt="drawing" width="600"/>
## Usage
```python
from llama_cpp import Llama
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm_p = AutoModelForCausalLM.from_pretrained("umarigan/Trendyol-LLM-7b-chat-v0.1-GGUF",
                                           model_file="trendyol-llm-7b-chat-v0.1.Q4_K_M.gguf",
                                           model_type="llama",
                                           gpu_layers=0)
# Chat Completion API
llm = Llama(model_path=llm_p.model_path, 
                        chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "çocuk hikayeleri yazan bir yazarsın"},
        {
            "role": "user",
            "content": "köpekler hakkında bir çocuk hikayesi yaz"
        }
    ]
)
```
Output:
```
{'id': 'chatcmpl-0d665fb2-a92a-408c-bc03-78c32bccab0d',
 'object': 'chat.completion',
 'created': 1707822047,
 'model': '/root/.cache/huggingface/hub/models--umarigan--Trendyol-LLM-7b-chat-v0.1-GGUF/blobs/323878a8570093178040e78b438d5670c0fdae2aa614a8ed58e784d697d4db52',
 'choices': [{'index': 0,
   'message': {'role': 'assistant',
    'content': '  Bir zamanlar, ormanda yaşayan cesur ve sadık bir köpek varmış. O, her zaman arkadaşlarına yardım etmeye hazırdı ve asla korkmuyordu. Bir gün, ormanın derinliklerinde gizemli bir ses duydu ve araştırmaya karar verdi. Yol boyunca birçok yaratıkla karşılaştı ama hiçbirinin kimliğini bilmiyordu. Sonunda, gizemli sesin geldiği yere ulaştı ve sonunda onu buldu.'},
   'finish_reason': 'stop'}],
 'usage': {'prompt_tokens': 39, 'completion_tokens': 85, 'total_tokens': 124}}
``` | 
| 
	shykennys/distilbert-base-uncased_emotion_ft | 
	shykennys | 2024-02-13T11:00:03Z | 91 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:emotion",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T09:37:14Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.934
    - name: F1
      type: f1
      value: 0.9344783366934866
    - name: Precision
      type: precision
      value: 0.9052089351876242
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1529
- Accuracy: 0.934
- F1: 0.9345
- Precision: 0.9052
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| No log        | 1.0   | 250  | 0.2728          | 0.9155   | 0.9138 | 0.9034    |
| 0.5164        | 2.0   | 500  | 0.1793          | 0.9275   | 0.9280 | 0.8951    |
| 0.5164        | 3.0   | 750  | 0.1552          | 0.935    | 0.9354 | 0.9036    |
| 0.1258        | 4.0   | 1000 | 0.1529          | 0.934    | 0.9345 | 0.9052    |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.2+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
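For inference, a minimal sketch with the standard pipeline:
```python
from transformers import pipeline

# The six emotion-dataset classes (sadness, joy, love, anger, fear, surprise) apply
# if id2label is set in the config; otherwise labels appear as LABEL_0..LABEL_5.
classifier = pipeline("text-classification", model="shykennys/distilbert-base-uncased_emotion_ft")
print(classifier("I can't believe how lucky I am today!"))
```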
 | 
| 
	slc48/a2c-PandaReachDense-v3 | 
	slc48 | 2024-02-13T10:59:48Z | 0 | 0 | 
	stable-baselines3 | 
	[
  "stable-baselines3",
  "PandaReachDense-v3",
  "deep-reinforcement-learning",
  "reinforcement-learning",
  "model-index",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T10:55:45Z | 
	---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: PandaReachDense-v3
      type: PandaReachDense-v3
    metrics:
    - type: mean_reward
      value: -0.20 +/- 0.09
      name: mean_reward
      verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual SB3 Hub convention and is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it as an A2C policy.
checkpoint = load_from_hub("slc48/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
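Once loaded, the policy can be rolled out in the matching environment; a sketch (requires `panda-gym`, which registers PandaReachDense-v3):
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v3 environment

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)  # `model` from the snippet above
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
```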
 | 
| 
	mlabonne/Monarch-7B-dare | 
	mlabonne | 2024-02-13T10:59:07Z | 7 | 1 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "merge",
  "mergekit",
  "lazymergekit",
  "base_model:mlabonne/NeuBeagle-7B",
  "base_model:finetune:mlabonne/NeuBeagle-7B",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T10:54:09Z | 
	---
license: cc-by-nc-4.0
tags:
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/NeuBeagle-7B
---
# Monarch-7B-dare
Monarch-7B-dare is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
## 🧩 Configuration
```yaml
models:
  - model: mlabonne/OmniTruthyBeagle-7B-v0 
    # No parameters necessary for base model
  - model: mlabonne/NeuBeagle-7B
    parameters:
      density: 0.53
      weight: 0.45
merge_method: dare_ties
base_model: mlabonne/OmniTruthyBeagle-7B-v0 
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Monarch-7B-dare"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | 
| 
	longcule123/book_122 | 
	longcule123 | 2024-02-13T10:58:29Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T00:26:12Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	pgajo/mbert-xlwa-en-it_EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mbert_E10_DEV87.0 | 
	pgajo | 2024-02-13T10:56:31Z | 93 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "bert",
  "question-answering",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2024-02-13T10:55:55Z | 
	---
{}
---
Model description:
    Model: pgajo/mbert-xlwa-en-it
    Dataset: TASTEset
    Unshuffled ratio: ['0']
    Shuffled ratio: ['1']
    Best exact match epoch: 10
    Best exact match: 86.54
    Best epoch: 10
    Drop duplicates: ['1']
    Max epochs = 10
    Optimizer lr = 3e-05
    Optimizer eps = 1e-08
    Batch size = 32
    Dataset path = pgajo/EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mbert
    
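No usage snippet is provided; a minimal sketch with the standard question-answering pipeline (assuming the checkpoint loads as a plain BERT QA model; the example is illustrative, in the spirit of the TASTEset ingredient task):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="pgajo/mbert-xlwa-en-it_EW-TT-PE_U0_S1_Tingredient_P0.25_DROP1_mbert_E10_DEV87.0",
)
print(qa(question="Quanta farina serve?", context="Per l'impasto servono 200 g di farina e due uova."))
```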
Results
|   epoch |   train_loss |   train_f1 |   train_exact |   dev_loss |   dev_f1 |   dev_exact |   test_loss |   test_f1 |   test_exact |
|--------:|-------------:|-----------:|--------------:|-----------:|---------:|------------:|------------:|----------:|-------------:|
|       1 |         1.18 |      68.16 |         50.69 |       0.7  |    81.28 |       69.51 |           0 |         0 |            0 |
|       2 |         0.39 |      88.83 |         80.23 |       0.62 |    85.69 |       78.57 |           0 |         0 |            0 |
|       3 |         0.16 |      95.33 |         91.53 |       0.7  |    86.71 |       81.04 |           0 |         0 |            0 |
|       4 |         0.09 |      97.02 |         94.56 |       0.79 |    87.62 |       82.42 |           0 |         0 |            0 |
|       5 |         0.07 |      97.82 |         96.07 |       0.71 |    86.34 |       81.32 |           0 |         0 |            0 |
|       6 |         0.06 |      97.58 |         96.07 |       0.63 |    88.88 |       83.79 |           0 |         0 |            0 |
|       7 |         0.04 |      98.77 |         98    |       0.59 |    89.36 |       84.34 |           0 |         0 |            0 |
|       8 |         0.04 |      98.89 |         98.14 |       0.7  |    88.27 |       83.24 |           0 |         0 |            0 |
|       9 |         0.02 |      99.53 |         98.9  |       0.72 |    89.48 |       85.44 |           0 |         0 |            0 |
|      10 |         0.02 |      99.31 |         98.55 |       0.73 |    90.3  |       86.54 |           0 |         0 |            0 | | 
| 
	MaziyarPanahi/sqlcoder-7b-2-GGUF | 
	MaziyarPanahi | 2024-02-13T10:52:27Z | 141 | 8 | 
	transformers | 
	[
  "transformers",
  "gguf",
  "mistral",
  "quantized",
  "2-bit",
  "3-bit",
  "4-bit",
  "5-bit",
  "6-bit",
  "8-bit",
  "GGUF",
  "safetensors",
  "llama",
  "text-generation",
  "license:cc-by-sa-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "has_space",
  "text-generation-inference",
  "region:us",
  "base_model:defog/sqlcoder-7b-2",
  "base_model:quantized:defog/sqlcoder-7b-2"
] | 
	text-generation | 2024-02-13T10:37:51Z | 
	---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- gguf
- llama
- text-generation
- license:cc-by-sa-4.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: sqlcoder-7b-2-GGUF
base_model: defog/sqlcoder-7b-2
inference: false
model_creator: defog
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/sqlcoder-7b-2-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-2-GGUF)
- Model creator: [defog](https://huggingface.co/defog)
- Original model: [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2)
## Description
[MaziyarPanahi/sqlcoder-7b-2-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-2-GGUF) contains GGUF format model files for [defog/sqlcoder-7b-2](https://huggingface.co/defog/sqlcoder-7b-2).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/sqlcoder-7b-2-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-2-GGUF) and below it, a specific filename to download, such as: sqlcoder-7b-2-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/sqlcoder-7b-2-GGUF sqlcoder-7b-2-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download [MaziyarPanahi/sqlcoder-7b-2-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-2-GGUF) --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/sqlcoder-7b-2-GGUF sqlcoder-7b-2-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sqlcoder-7b-2-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, set the CMAKE_ARGS variable in PowerShell before installing; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./sqlcoder-7b-2-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  """<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant""", # Prompt (triple-quoted so the multi-line template is valid Python)
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sqlcoder-7b-2-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) | 
| 
	th4tkh13m/amazon_shoe_reviews | 
	th4tkh13m | 2024-02-13T10:48:37Z | 98 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "base_model:distilbert/distilbert-base-uncased",
  "base_model:finetune:distilbert/distilbert-base-uncased",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-06-10T08:43:07Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: amazon_shoe_reviews
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_shoe_reviews
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
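No inference example is given; a minimal sketch:
```python
from transformers import pipeline

# Sketch only: the label set (e.g. star ratings) depends on the checkpoint's config.
classifier = pipeline("text-classification", model="th4tkh13m/amazon_shoe_reviews")
print(classifier("Comfortable at first, but the sole wore out within a month."))
```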
 | 
| 
	maramzarkaoui/llama2 | 
	maramzarkaoui | 2024-02-13T10:46:05Z | 2 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "autotrain",
  "text-generation",
  "license:other",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T10:03:24Z | 
	---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | 
| 
	Augustya07/Mistral-7B-Instruct-v0.2-function-calling-hotel-adapter | 
	Augustya07 | 2024-02-13T10:41:58Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T10:30:45Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	hiendang7613/xlmr-lstm-crf-resume-ner | 
	hiendang7613 | 2024-02-13T10:41:55Z | 144 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "xlm-roberta",
  "token-classification",
  "generated_from_trainer",
  "dataset:fjd_dataset",
  "base_model:FacebookAI/xlm-roberta-base",
  "base_model:finetune:FacebookAI/xlm-roberta-base",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	token-classification | 2023-11-19T13:22:37Z | 
	---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- fjd_dataset
model-index:
- name: xlmr-lstm-crf-resume-ner
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlmr-lstm-crf-resume-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the fjd_dataset dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1998
- eval_precision: 0.5659
- eval_recall: 0.6020
- eval_f1: 0.5834
- eval_accuracy: 0.9475
- eval_runtime: 51.9811
- eval_samples_per_second: 95.689
- eval_steps_per_second: 1.501
- epoch: 40.0
- step: 18400
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
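A minimal token-classification sketch (assumption: the checkpoint loads with the standard pipeline; if the LSTM-CRF head needs the repo's custom code, follow that instead):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="hiendang7613/xlmr-lstm-crf-resume-ner",
    aggregation_strategy="simple",
)
print(ner("Nguyen Van A worked as a backend engineer at FPT Software in Hanoi."))
```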
 | 
| 
	uyiosa/doctor_mistral | 
	uyiosa | 2024-02-13T10:36:35Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T10:36:04Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	gK29382231121/distilbert-base-uncased-finetuned-emotion_new | 
	gK29382231121 | 2024-02-13T10:34:57Z | 92 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "base_model:distilbert/distilbert-base-uncased",
  "base_model:finetune:distilbert/distilbert-base-uncased",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T10:34:49Z | 
	---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion_new
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion_new
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8847
- Accuracy: 0.8
- F1: 0.7333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5408        | 1.0   | 4    | 0.7674          | 0.8      | 0.7333 |
| 0.4368        | 2.0   | 8    | 0.7471          | 0.8      | 0.7333 |
| 0.3222        | 3.0   | 12   | 0.7318          | 0.8      | 0.7333 |
| 0.4061        | 4.0   | 16   | 0.7289          | 0.8      | 0.7333 |
| 0.3774        | 5.0   | 20   | 0.7732          | 0.8      | 0.7333 |
| 0.3304        | 6.0   | 24   | 0.7874          | 0.8      | 0.7333 |
| 0.3042        | 7.0   | 28   | 0.8036          | 0.8      | 0.7333 |
| 0.4571        | 8.0   | 32   | 0.8038          | 0.8      | 0.7333 |
| 0.1992        | 9.0   | 36   | 0.8271          | 0.8      | 0.7333 |
| 0.2661        | 10.0  | 40   | 0.8498          | 0.8      | 0.7333 |
| 0.2361        | 11.0  | 44   | 0.8582          | 0.8      | 0.7333 |
| 0.2292        | 12.0  | 48   | 0.8620          | 0.8      | 0.7333 |
| 0.2363        | 13.0  | 52   | 0.8678          | 0.8      | 0.7333 |
| 0.2574        | 14.0  | 56   | 0.8672          | 0.8      | 0.7333 |
| 0.5177        | 15.0  | 60   | 0.8668          | 0.8      | 0.7333 |
| 0.226         | 16.0  | 64   | 0.8726          | 0.8      | 0.7333 |
| 0.1726        | 17.0  | 68   | 0.8788          | 0.8      | 0.7333 |
| 0.2439        | 18.0  | 72   | 0.8823          | 0.8      | 0.7333 |
| 0.2005        | 19.0  | 76   | 0.8842          | 0.8      | 0.7333 |
| 0.2541        | 20.0  | 80   | 0.8847          | 0.8      | 0.7333 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
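For completeness, a minimal inference sketch (label names depend on the checkpoint's config):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "gK29382231121/distilbert-base-uncased-finetuned-emotion_new"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("I am so happy with this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```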
 | 
| 
	mii-llm/maestrale-chat-v0.3-alpha-sft | 
	mii-llm | 2024-02-13T10:34:49Z | 9 | 2 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "sft",
  "it",
  "chatml",
  "axolotl",
  "conversational",
  "license:cc-by-nc-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-04T07:36:23Z | 
	---
tags:
- sft
- it
- mistral
- chatml
- axolotl
model-index:
- name: maestrale-chat-v0.3-alpha
  results: []
license: cc-by-nc-4.0
language:
- it
prompt_template: >-
  <|im_start|>system {system_message}<|im_end|> <|im_start|>user
  {prompt}<|im_end|> <|im_start|>assistant
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/3XRfTOq.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
    </div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat alpha ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7B for the Italian language, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on conversations and instructions for two epochs.
**v0.3**
- Function calling
- Reduced default system prompt to avoid wasting tokens (pre-alignment)
This model uses ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Usage:
```python
from transformers import (
    AutoTokenizer, 
    AutoModelForCausalLM, 
    GenerationConfig,
    TextStreamer
)
import torch
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.3-alpha")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.3-alpha", load_in_8bit=True, device_map="auto")
gen = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=500,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)
messages = [
    {"role": "system", "content": "Sei un assistente utile."},
    {"role": "user", "content": "{prompt}"}
]
with torch.no_grad(), torch.backends.cuda.sdp_kernel(
    enable_flash=True, 
    enable_math=False,
    enable_mem_efficient=False
):
    temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(temp, return_tensors="pt").to("cuda")
    streamer = TextStreamer(tokenizer, skip_prompt=True)
    _ = model.generate(
        **inputs,
        streamer=streamer,
        generation_config=gen
    )
```
## Intended uses & limitations
This is an alpha version: it is not `aligned` and is a first test. We are working on alignment data and evals.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) | 
| 
	mathreader/dqn-SpaceInvadersNoFrameskip-v4-v2 | 
	mathreader | 2024-02-13T10:33:41Z | 0 | 0 | 
	stable-baselines3 | 
	[
  "stable-baselines3",
  "SpaceInvadersNoFrameskip-v4",
  "deep-reinforcement-learning",
  "reinforcement-learning",
  "model-index",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T10:33:12Z | 
	---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: SpaceInvadersNoFrameskip-v4
      type: SpaceInvadersNoFrameskip-v4
    metrics:
    - type: mean_reward
      value: 637.00 +/- 120.48
      name: mean_reward
      verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mathreader -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4  -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mathreader -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4  -f logs/
```
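You can also load the checkpoint directly with stable-baselines3 from Python; a minimal sketch, assuming the `huggingface_sb3` helper is installed and that the checkpoint follows the Zoo's `dqn-<env>.zip` naming convention:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Fetch the checkpoint from the Hub (the filename is an assumption based on
# the RL Zoo naming convention).
checkpoint = load_from_hub(
    repo_id="mathreader/dqn-SpaceInvadersNoFrameskip-v4-v2",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```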
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mathreader
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
             ('buffer_size', 100000),
             ('env_wrapper',
              ['stable_baselines3.common.atari_wrappers.AtariWrapper']),
             ('exploration_final_eps', 0.01),
             ('exploration_fraction', 0.1),
             ('frame_stack', 4),
             ('gradient_steps', 1),
             ('learning_rate', 0.0001),
             ('learning_starts', 100000),
             ('n_timesteps', 1000000.0),
             ('optimize_memory_usage', False),
             ('policy', 'CnnPolicy'),
             ('target_update_interval', 1000),
             ('train_freq', 4),
             ('normalize', False)])
```
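For reference, the same configuration expressed against the stable-baselines3 API directly (a sketch, not the exact Zoo training script):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies AtariWrapper; frame_stack=4 maps to VecFrameStack.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4"), n_stack=4)
model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=100_000,
    learning_rate=1e-4,
    learning_starts=100_000,
    target_update_interval=1000,
    train_freq=4,
    gradient_steps=1,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    optimize_memory_usage=False,
)
model.learn(total_timesteps=1_000_000)
```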
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
 | 
| 
	MaziyarPanahi/sqlcoder-7b-GGUF | 
	MaziyarPanahi | 2024-02-13T10:29:08Z | 56 | 0 | 
	transformers | 
	[
  "transformers",
  "gguf",
  "mistral",
  "quantized",
  "2-bit",
  "3-bit",
  "4-bit",
  "5-bit",
  "6-bit",
  "8-bit",
  "GGUF",
  "pytorch",
  "text-generation",
  "code",
  "en",
  "license:cc-by-sa-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "has_space",
  "text-generation-inference",
  "region:us",
  "base_model:defog/sqlcoder-7b",
  "base_model:quantized:defog/sqlcoder-7b"
] | 
	text-generation | 2024-02-13T10:13:01Z | 
	---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- pytorch
- mistral
- text-generation
- code
- en
- license:cc-by-sa-4.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: sqlcoder-7b-GGUF
base_model: defog/sqlcoder-7b
inference: false
model_creator: defog
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/sqlcoder-7b-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-GGUF)
- Model creator: [defog](https://huggingface.co/defog)
- Original model: [defog/sqlcoder-7b](https://huggingface.co/defog/sqlcoder-7b)
## Description
[MaziyarPanahi/sqlcoder-7b-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-GGUF) contains GGUF format model files for [defog/sqlcoder-7b](https://huggingface.co/defog/sqlcoder-7b).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
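These figures can be sanity-checked with a little arithmetic: each super-block holds 256 weights, and the per-weight cost is the quantized payload plus block and super-block metadata. A sketch (the metadata sizes are our assumption about the layout, not taken from the llama.cpp source):
```python
# bpw = (payload bits + block metadata + super-block metadata) / 256 weights.
# Assumes one fp16 super-block scale (16 bits) for "type-0" layouts and an
# fp16 scale plus min (32 bits) for "type-1" layouts.
def bpw(bits, block_meta_bits, super_meta_bits, n_weights=256):
    return (n_weights * bits + block_meta_bits + super_meta_bits) / n_weights

print(bpw(3, 16 * 6, 16))           # Q3_K -> 3.4375
print(bpw(4, 8 * (6 + 6), 2 * 16))  # Q4_K -> 4.5
print(bpw(5, 8 * (6 + 6), 2 * 16))  # Q5_K -> 5.5
print(bpw(6, 16 * 8, 16))           # Q6_K -> 6.5625
```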
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/sqlcoder-7b-GGUF](https://huggingface.co/MaziyarPanahi/sqlcoder-7b-GGUF) and below it, a specific filename to download, such as: sqlcoder-7b-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/sqlcoder-7b-GGUF sqlcoder-7b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/sqlcoder-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/sqlcoder-7b-GGUF sqlcoder-7b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m sqlcoder-7b-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
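llama.cpp also ships the server option mentioned earlier; an illustrative invocation (flag names may differ across llama.cpp versions):
```shell
./server -m sqlcoder-7b-GGUF.Q4_K_M.gguf -c 4096 -ngl 35 --host 0.0.0.0 --port 8080
```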
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g., for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./sqlcoder-7b-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt (newlines written as \n so the string is valid Python)
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./sqlcoder-7b-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) | 
| 
	perler/ppsurf | 
	perler | 2024-02-13T10:14:09Z | 0 | 0 | null | 
	[
  "en",
  "license:mit",
  "region:us"
] | null | 2024-02-13T09:56:48Z | 
	---
license: mit
language:
- en
metrics:
- f1
--- | 
| 
	wild-chimpanzee-foundation/uniformerv2_large-clip-k710-pre-k400_cb-focal-loss | 
	wild-chimpanzee-foundation | 2024-02-13T10:09:57Z | 0 | 0 | null | 
	[
  "video-classification",
  "en",
  "license:mit",
  "region:us"
] | 
	video-classification | 2023-12-05T12:51:40Z | 
	---
license: mit
language:
- en
pipeline_tag: video-classification
---
# Model Card for UniformerV2 
<!-- Provide a quick summary of what the model is/does. -->
UniformerV2 is a large transformer-based model trained on a binary classification task. Specifically, it is trained to detect whether the input video contains one or more chimpanzees exhibiting a reaction to the presence of a camera trap.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
UniformerV2 is a large transformer-based model trained on a binary classification task. Specifically, it is trained to detect whether the input video contains one or more chimpanzees exhibiting a reaction to the presence of a camera trap. As the dataset heavily favours videos exhibiting no reaction to the camera, we employ a class-balanced focal loss to address the class imbalance.
- **Developed by:** Otto Brookes, Christophe Boesch, Hjalmar S. Kühl, Majid Mirmehdi, Tilo Burghardt
- **Model type:** Vision Transformer, UniformerV2
- **License:** MIT
## Training Details
### Training Data
It is trained on camera trap video footage from 15 different countries in Africa, collected as part of The Pan African Programme: The Cultured Chimpanzee.
### Results
We use mean average precision (mAP) to evaluate models.
| Dataset   | Model      | Loss       | mAP (%) |
|-----------|------------|------------|---------|
| PanAf     | Uniformer  | CB Focal   | 87.82   | | 
| 
	MaziyarPanahi/samantha-1.1-westlake-7b-GPTQ | 
	MaziyarPanahi | 2024-02-13T10:05:45Z | 75 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "finetuned",
  "quantized",
  "4-bit",
  "gptq",
  "pytorch",
  "conversational",
  "dataset:cognitivecomputations/samantha-data",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "text-generation-inference",
  "region:us",
  "base_model:cognitivecomputations/samantha-1.1-westlake-7b",
  "base_model:finetune:cognitivecomputations/samantha-1.1-westlake-7b"
] | 
	text-generation | 2024-02-13T10:03:37Z | 
	---
license: apache-2.0
tags:
- finetuned
- quantized
- 4-bit
- gptq
- transformers
- pytorch
- mistral
- text-generation
- conversational
- dataset:cognitivecomputations/samantha-data
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
model_name: samantha-1.1-westlake-7b-GPTQ
base_model: cognitivecomputations/samantha-1.1-westlake-7b
inference: false
model_creator: cognitivecomputations
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# Description
[MaziyarPanahi/samantha-1.1-westlake-7b-GPTQ](https://huggingface.co/MaziyarPanahi/samantha-1.1-westlake-7b-GPTQ) is a quantized (GPTQ) version of [cognitivecomputations/samantha-1.1-westlake-7b](https://huggingface.co/cognitivecomputations/samantha-1.1-westlake-7b).
## How to use
### Install the necessary packages
```shell
pip install --upgrade accelerate auto-gptq transformers
```
### Example Python code
```python
from transformers import AutoTokenizer, pipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import torch
model_id = "MaziyarPanahi/samantha-1.1-westlake-7b-GPTQ"
quantize_config = BaseQuantizeConfig(
        bits=4,
        group_size=128,
        desc_act=False
    )
model = AutoGPTQForCausalLM.from_quantized(
        model_id,
        use_safetensors=True,
        device="cuda:0",
        quantize_config=quantize_config)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,  # Sampling must be enabled for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.1
)
outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
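# Optional (a sketch, not part of the original example): since the base model
# is conversational, applying the tokenizer's chat template, if one is defined,
# may work better than a raw prompt string.
messages = [{"role": "user", "content": "What is a large language model?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt)[0]["generated_text"])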
``` | 
| 
	MaziyarPanahi/natural-sql-7b-GGUF | 
	MaziyarPanahi | 2024-02-13T10:04:12Z | 59 | 2 | 
	transformers | 
	[
  "transformers",
  "gguf",
  "mistral",
  "quantized",
  "2-bit",
  "3-bit",
  "4-bit",
  "5-bit",
  "6-bit",
  "8-bit",
  "GGUF",
  "safetensors",
  "llama",
  "text-generation",
  "instruct",
  "finetune",
  "conversational",
  "base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
  "license:cc-by-sa-4.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "has_space",
  "text-generation-inference",
  "region:us",
  "base_model:chatdb/natural-sql-7b",
  "base_model:quantized:chatdb/natural-sql-7b"
] | 
	text-generation | 2024-02-13T09:48:46Z | 
	---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- llama
- text-generation
- instruct
- finetune
- conversational
- base_model:deepseek-ai/deepseek-coder-6.7b-instruct
- license:cc-by-sa-4.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: natural-sql-7b-GGUF
base_model: chatdb/natural-sql-7b
inference: false
model_creator: chatdb
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/natural-sql-7b-GGUF](https://huggingface.co/MaziyarPanahi/natural-sql-7b-GGUF)
- Model creator: [chatdb](https://huggingface.co/chatdb)
- Original model: [chatdb/natural-sql-7b](https://huggingface.co/chatdb/natural-sql-7b)
## Description
[MaziyarPanahi/natural-sql-7b-GGUF](https://huggingface.co/MaziyarPanahi/natural-sql-7b-GGUF) contains GGUF format model files for [chatdb/natural-sql-7b](https://huggingface.co/chatdb/natural-sql-7b).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note that, as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
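As one concrete example of the OpenAI-compatible server mentioned above, llama-cpp-python can serve a GGUF file over HTTP; a sketch, assuming the server extra is installed:
```shell
pip install 'llama-cpp-python[server]'
python -m llama_cpp.server --model ./natural-sql-7b-GGUF.Q4_K_M.gguf --n_gpu_layers 35
```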
### Explanation of quantisation methods
<details>
  <summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/natural-sql-7b-GGUF](https://huggingface.co/MaziyarPanahi/natural-sql-7b-GGUF) and below it, a specific filename to download, such as: natural-sql-7b-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/natural-sql-7b-GGUF natural-sql-7b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/natural-sql-7b-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/natural-sql-7b-GGUF natural-sql-7b-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m natural-sql-7b-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# In Windows, to set the CMAKE_ARGS variables in PowerShell, follow this format; e.g., for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./natural-sql-7b-GGUF.Q4_K_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,            # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35         # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
  "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant",  # Prompt (newlines written as \n so the string is valid Python)
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./natural-sql-7b-GGUF.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) | 
| 
	Doniaa/tryMModel | 
	Doniaa | 2024-02-13T10:02:07Z | 33 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "roberta",
  "text-generation",
  "generated_from_trainer",
  "base_model:distilbert/distilroberta-base",
  "base_model:finetune:distilbert/distilroberta-base",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T10:01:45Z | 
	---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
 | 
| 
	ryatora/distilbert-base-uncased-finetuned-clinc | 
	ryatora | 2024-02-13T10:00:29Z | 90 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "tensorboard",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:clinc_oos",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T03:11:24Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: clinc_oos
      type: clinc_oos
      args: plus
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9170967741935484
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7778
- Accuracy: 0.9171
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 318  | 3.2778          | 0.7390   |
| 3.7833        | 2.0   | 636  | 1.8740          | 0.8287   |
| 3.7833        | 3.0   | 954  | 1.1618          | 0.8894   |
| 1.6893        | 4.0   | 1272 | 0.8600          | 0.9090   |
| 0.9056        | 5.0   | 1590 | 0.7778          | 0.9171   |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu121
- Datasets 1.16.1
- Tokenizers 0.15.1
 | 
| 
	tiredbear/distilbert-base-uncased-finetuned-emotion | 
	tiredbear | 2024-02-13T09:52:05Z | 94 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "dataset:emotion",
  "base_model:distilbert/distilbert-base-uncased",
  "base_model:finetune:distilbert/distilbert-base-uncased",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T09:42:11Z | 
	---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: emotion
      type: emotion
      config: split
      split: validation
      args: split
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9255
    - name: F1
      type: f1
      value: 0.9256033121528526
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2098
- Accuracy: 0.9255
- F1: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
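For reference, a rough `TrainingArguments` equivalent of the list above (a sketch; the exact `Trainer` setup was not published with this card):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed in this card; other arguments are defaults.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```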
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8453        | 1.0   | 250  | 0.3061          | 0.91     | 0.9094 |
| 0.2487        | 2.0   | 500  | 0.2098          | 0.9255   | 0.9256 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
 | 
| 
	shafi4/my-pet-cat-xzg | 
	shafi4 | 2024-02-13T09:47:23Z | 0 | 1 | 
	diffusers | 
	[
  "diffusers",
  "safetensors",
  "NxtWave-GenAI-Webinar",
  "text-to-image",
  "stable-diffusion",
  "license:creativeml-openrail-m",
  "autotrain_compatible",
  "endpoints_compatible",
  "diffusers:StableDiffusionPipeline",
  "region:us"
] | 
	text-to-image | 2024-02-13T09:42:57Z | 
	---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-cat-XZG Dreambooth model trained by shafi4 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21KT1A0559
Sample pictures of this concept:
 | 
| 
	ksh-nyp/results_tcm_faq | 
	ksh-nyp | 2024-02-13T09:42:59Z | 0 | 0 | null | 
	[
  "generated_from_trainer",
  "base_model:NousResearch/Llama-2-7b-chat-hf",
  "base_model:finetune:NousResearch/Llama-2-7b-chat-hf",
  "region:us"
] | null | 2024-02-13T09:01:49Z | 
	---
base_model: NousResearch/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: results_tcm_faq
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_tcm_faq
This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
 | 
| 
	elderberry17/base-pokemon-finetuned | 
	elderberry17 | 2024-02-13T09:42:27Z | 48 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "blip",
  "image-text-to-text",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | 
	image-text-to-text | 2024-02-12T14:49:11Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	kenchenxingyu/flan-large-lora-stance-human6 | 
	kenchenxingyu | 2024-02-13T09:35:42Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T09:35:38Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	Gordon119/TAT-openai-whisper-large-v3-Lora-ContinualTraining-epoch4-total5epoch | 
	Gordon119 | 2024-02-13T09:28:56Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-02T16:50:47Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	kouki13/llama2 | 
	kouki13 | 2024-02-13T09:27:18Z | 0 | 0 | null | 
	[
  "safetensors",
  "autotrain",
  "text-generation",
  "license:other",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T09:26:40Z | 
	---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` | 
| 
	malteos/hermeo-7b | 
	malteos | 2024-02-13T09:26:55Z | 23 | 17 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "merge",
  "mergekit",
  "en",
  "de",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2023-12-12T20:35:41Z | 
	---
language:
  - en
  - de
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
tags:
  - merge
  - mergekit
---

_Hermes + Leo = Hermeo_
# Hermeo-7B
A German-English language model merged from [DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2) and [leo-mistral-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b-chat) using [mergekit](https://github.com/cg123/mergekit).
Both base models are fine-tuned versions of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
### Model details
- **Merged from:** [leo-mistral-hessianai-7b-chat](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b-chat) and [DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
- **Model type:** Causal decoder-only transformer language model
- **Languages:** English and German
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='malteos/hermeo-7b')
>>> set_seed(42)
>>> generator("Hallo, Ich bin ein Sprachmodell,", max_length=40, num_return_sequences=1)
[{'generated_text': 'Hallo, Ich bin ein Sprachmodell, das dir bei der Übersetzung von Texten zwischen Deutsch und Englisch helfen kann. Wenn du mir einen Text in Deutsch'}]
```
### Acknowledgements
- This model release is heavily inspired by [Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-2-Slerp)
- Thanks to the authors of the base models: [Mistral](https://mistral.ai/), [LAION](https://laion.ai/), [HessianAI](https://hessian.ai/), [Open Access AI Collective](https://huggingface.co/openaccess-ai-collective), [@teknium](https://huggingface.co/teknium), [@bjoernp](https://huggingface.co/bjoernp)
- The [German evaluation datasets and scripts](https://github.com/bjoernpl/lm-evaluation-harness-de/tree/mmlu_de) from [@bjoernp](https://huggingface.co/bjoernp) were used.
- The computing resources from [DFKI's PEGASUS cluster](https://pegasus.dfki.de/) were used for the evaluation.
## Evaluation
The evaluation follows the methodology of the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
### German benchmarks
| **German tasks:**             | **MMLU-DE**    | **Hellaswag-DE** | **ARC-DE**      |**Average**      |
|-------------------------------|-------------|---------------|--------------|--------------|
| **Models / Few-shots:**       | _(5 shots)_ | _(10 shots)_  | _(24 shots)_ | |
| _7B parameters_      |  | |  | |
| llama-2-7b                    | 0.400       | 0.513         | 0.381        | 0.431  |
| leo-hessianai-7b              | 0.400       | 0.609         | 0.429        | 0.479 |
| bloom-6b4-clp-german          | 0.274       | 0.550         | 0.351        | 0.392 |
| mistral-7b                    | **0.524**       | 0.588         | 0.473        | 0.528 |
| leo-mistral-hessianai-7b      | 0.481       | 0.663         | 0.485        | 0.543 |
| leo-mistral-hessianai-7b-chat | 0.458       | 0.617         | 0.465        | 0.513 |
| DPOpenHermes-7B-v2            | 0.517         | 0.603         | 0.515        | 0.545 |
| hermeo-7b (this model)        | 0.511       | **0.668**         | **0.528**        | **0.569** |
| _13B parameters_      |  | |  | |
| llama-2-13b                    | 0.469       | 0.581        | 0.468        | 0.506 |
| leo-hessianai-13b              | **0.486**       | **0.658**         | **0.509**       | **0.551** |
| _70B parameters_      |  | |  | |
| llama-2-70b                    | 0.597       | 0.674       | 0.561       | 0.611 |
| leo-hessianai-70b              | **0.653**       | **0.721**         | **0.600**       | **0.658** |
### English benchmarks
| **English tasks:**         | **MMLU**    | **Hellaswag** | **ARC**      | **Average** |
|----------------------------|-------------|---------------|--------------|-------------|
| **Models / Few-shots:**    | _(5 shots)_ | _(10 shots)_  | _(24 shots)_ |             |
| llama-2-7b                 |       0.466 |         0.786 |        0.530 |       0.594 |
| leolm-hessianai-7b         |       0.423 |         0.759 |        0.522 |       0.568 |
| bloom-6b4-clp-german       |       0.264 |         0.525 |        0.328 |       0.372 |
| mistral-7b                 |   **0.635** |     **0.832** |        0.607 |   **0.691** |
| leolm-mistral-hessianai-7b |       0.550 |         0.777 |        0.518 |       0.615 |
| hermeo-7b (this model)     |       0.601 |         0.821 |    **0.620** |       0.681 |
## Prompting / Prompt Template
Prompt dialogue template (ChatML format):
```
"""
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
"""
```
The model input can contain multiple conversation turns between user and assistant, e.g.
```
<|im_start|>user
{prompt 1}<|im_end|>
<|im_start|>assistant
{reply 1}<|im_end|>
<|im_start|>user
{prompt 2}<|im_end|>
<|im_start|>assistant
(...)
```
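For instance, a single-turn prompt can be assembled by hand following the template above (a minimal sketch; the system message, question, and `max_new_tokens` value are illustrative, not part of this card):
```python
from transformers import pipeline

generator = pipeline('text-generation', model='malteos/hermeo-7b')

# Build a single-turn ChatML prompt exactly as specified above.
system_message = "You are a helpful bilingual assistant."
prompt = "Was ist die Hauptstadt von Deutschland?"
chatml_prompt = (
    f"<|im_start|>system\n{system_message}<|im_end|>\n"
    f"<|im_start|>user\n{prompt}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
# return_full_text=False keeps only the assistant's continuation.
output = generator(chatml_prompt, max_new_tokens=64, return_full_text=False)
print(output[0]['generated_text'])
```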
## License
[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0.html)
## See also
- AWQ quantized version: https://huggingface.co/mayflowergmbh/hermeo-7b-awq
 | 
| 
	GregoRio123/pyt | 
	GregoRio123 | 2024-02-13T09:26:18Z | 0 | 0 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T08:27:31Z | 
	---
license: creativeml-openrail-m
---
 | 
| 
	aidonuts/pernicious-001-ep2 | 
	aidonuts | 2024-02-13T09:22:29Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "conversational",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T09:21:33Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	gK29382231121/distilbert-base-uncased-finetuned-emotion | 
	gK29382231121 | 2024-02-13T09:22:18Z | 92 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "distilbert",
  "text-classification",
  "generated_from_trainer",
  "base_model:distilbert/distilbert-base-uncased",
  "base_model:finetune:distilbert/distilbert-base-uncased",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T09:22:10Z | 
	---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2269
- Accuracy: 0.9215
- F1: 0.9216
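A minimal inference sketch (assuming the standard `text-classification` pipeline; the emotion label names depend on the unnamed training dataset):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; label names come from the checkpoint's config.
classifier = pipeline("text-classification", model="gK29382231121/distilbert-base-uncased-finetuned-emotion")
print(classifier("I am so happy today!"))
```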
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8758        | 1.0   | 250  | 0.3253          | 0.905    | 0.9045 |
| 0.2571        | 2.0   | 500  | 0.2269          | 0.9215   | 0.9216 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu118
- Datasets 2.17.0
- Tokenizers 0.15.2
 | 
| 
	shafi4/my-pet-cat | 
	shafi4 | 2024-02-13T09:21:17Z | 0 | 0 | null | 
	[
  "safetensors",
  "NxtWave-GenAI-Webinar",
  "text-to-image",
  "stable-diffusion",
  "license:creativeml-openrail-m",
  "region:us"
] | 
	text-to-image | 2024-02-13T09:19:09Z | 
	---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-CAT Dreambooth model trained by shafi4 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21KT1A0559
Sample pictures of this concept:
 | 
| 
	Kruti23/christmas-tree | 
	Kruti23 | 2024-02-13T09:17:42Z | 0 | 2 | 
	diffusers | 
	[
  "diffusers",
  "safetensors",
  "NxtWave-GenAI-Webinar",
  "text-to-image",
  "stable-diffusion",
  "license:creativeml-openrail-m",
  "autotrain_compatible",
  "endpoints_compatible",
  "diffusers:StableDiffusionPipeline",
  "region:us"
] | 
	text-to-image | 2024-02-13T09:10:55Z | 
	---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Christmas-Tree Dreambooth model trained by Kruti23 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 112110090
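A minimal `diffusers` sketch for trying the concept (the learned concept's prompt token is not documented in this card; "christmas-tree" is a guess):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth checkpoint; fp16 keeps memory modest on a consumer GPU.
pipe = StableDiffusionPipeline.from_pretrained("Kruti23/christmas-tree", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of christmas-tree in a snowy living room").images[0]
image.save("christmas_tree.png")
```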
Sample pictures of this concept:
 | 
| 
	arshsin/whisper-tiny-finetuned-minds14 | 
	arshsin | 2024-02-13T09:11:43Z | 62 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "dataset:PolyAI/minds14",
  "base_model:openai/whisper-tiny",
  "base_model:finetune:openai/whisper-tiny",
  "license:apache-2.0",
  "model-index",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T09:11:32Z | 
	---
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds14
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: PolyAI/minds14
      type: PolyAI/minds14
      config: en-US
      split: train
      args: en-US
    metrics:
    - name: Wer
      type: wer
      value: 0.3624031007751938
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6785
- Wer Ortho: 0.3607
- Wer: 0.3624
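A minimal transcription sketch (assuming the standard `automatic-speech-recognition` pipeline; `audio.wav` is a placeholder for a local recording):
```python
from transformers import pipeline

# The pipeline handles decoding and resampling the audio file for Whisper.
asr = pipeline("automatic-speech-recognition", model="arshsin/whisper-tiny-finetuned-minds14")
print(asr("audio.wav")["text"])
```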
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 3.8342        | 1.0   | 28   | 2.7013          | 0.4859    | 0.3669 |
| 1.52          | 2.0   | 56   | 0.6447          | 0.3822    | 0.3624 |
| 0.4282        | 3.0   | 84   | 0.5154          | 0.3573    | 0.3521 |
| 0.2511        | 4.0   | 112  | 0.5017          | 0.3452    | 0.3430 |
| 0.1461        | 5.0   | 140  | 0.5106          | 0.3620    | 0.3572 |
| 0.0829        | 6.0   | 168  | 0.5399          | 0.3641    | 0.3592 |
| 0.0423        | 7.0   | 196  | 0.5596          | 0.3573    | 0.3527 |
| 0.0199        | 8.0   | 224  | 0.5846          | 0.3627    | 0.3598 |
| 0.0093        | 9.0   | 252  | 0.6006          | 0.3594    | 0.3572 |
| 0.0056        | 10.0  | 280  | 0.6207          | 0.3345    | 0.3301 |
| 0.0037        | 11.0  | 308  | 0.6238          | 0.3560    | 0.3534 |
| 0.0021        | 12.0  | 336  | 0.6377          | 0.3486    | 0.3482 |
| 0.0016        | 13.0  | 364  | 0.6485          | 0.3594    | 0.3579 |
| 0.0013        | 14.0  | 392  | 0.6621          | 0.3567    | 0.3572 |
| 0.0011        | 15.0  | 420  | 0.6617          | 0.3587    | 0.3605 |
| 0.0009        | 16.0  | 448  | 0.6682          | 0.3560    | 0.3559 |
| 0.0008        | 17.0  | 476  | 0.6741          | 0.3627    | 0.3624 |
| 0.0008        | 17.86 | 500  | 0.6785          | 0.3607    | 0.3624 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
 | 
| 
	hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-contradict-context | 
	hoanghoavienvo | 2024-02-13T09:10:47Z | 92 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "roberta",
  "text-classification",
  "generated_from_trainer",
  "base_model:FacebookAI/roberta-base",
  "base_model:finetune:FacebookAI/roberta-base",
  "license:mit",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2024-02-13T08:53:52Z | 
	---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-detect-cheapfake-combined-train-test-contradict-context
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-detect-cheapfake-combined-train-test-contradict-context
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unspecified dataset.
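A minimal classification sketch (hypothetical usage: the expected input format for caption/context pairs and the label semantics are not documented in this card):
```python
from transformers import pipeline

# Scores the input text; label names come from the checkpoint's config.
classifier = pipeline("text-classification", model="hoanghoavienvo/roberta-base-detect-cheapfake-combined-train-test-contradict-context")
print(classifier("A news caption paired with its surrounding context."))
```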
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
 | 
| 
	haihuynh/ppo-SnowballTarget | 
	haihuynh | 2024-02-13T09:10:11Z | 0 | 0 | 
	ml-agents | 
	[
  "ml-agents",
  "tensorboard",
  "onnx",
  "SnowballTarget",
  "deep-reinforcement-learning",
  "reinforcement-learning",
  "ML-Agents-SnowballTarget",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T09:10:08Z | 
	---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
  # **ppo** Agent playing **SnowballTarget**
  This is a trained model of a **ppo** agent playing **SnowballTarget**
  using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
  ## Usage (with ML-Agents)
  The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
  We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
  - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
  browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
  - A *longer tutorial* to understand how ML-Agents works:
  https://huggingface.co/learn/deep-rl-course/unit5/introduction
  ### Resume the training
  ```bash
  mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
  ```
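  To fetch the trained agent locally first, the checkpoint can typically be downloaded with the Hub integration bundled with ML-Agents (the `mlagents-load-from-hf` command is assumed from the standard ML-Agents/Hub setup):
  ```bash
  mlagents-load-from-hf --repo-id="haihuynh/ppo-SnowballTarget" --local-dir="./downloads"
  ```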
  ### Watch your Agent play
  You can watch your agent **playing directly in your browser**:
  1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
  2. Find your model_id: haihuynh/ppo-SnowballTarget
  3. Select your *.nn /*.onnx file
  4. Click on Watch the agent play 👀
   | 
| 
	santoshdahal/whisper-medium-nepali | 
	santoshdahal | 2024-02-13T09:01:53Z | 62 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "whisper",
  "automatic-speech-recognition",
  "hf-asr-leaderboard",
  "generated_from_trainer",
  "np",
  "dataset:mozilla-foundation/common_voice_11_0",
  "base_model:openai/whisper-medium",
  "base_model:finetune:openai/whisper-medium",
  "license:apache-2.0",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-02-13T08:49:03Z | 
	---
language:
- np
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: santoshdahal/whispher-ne-medium
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# santoshdahal/whispher-ne-medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
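A minimal transcription sketch (assuming the standard pipeline; forcing Nepali through `generate_kwargs` mirrors Whisper's `generate()` options and is an untested assumption for this checkpoint):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="santoshdahal/whisper-medium-nepali")
# "audio.wav" is a placeholder for a local recording.
print(asr("audio.wav", generate_kwargs={"language": "nepali", "task": "transcribe"})["text"])
```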
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
 | 
| 
	RMWeerasinghe/t5-small-finetuned-2048 | 
	RMWeerasinghe | 2024-02-13T09:00:32Z | 98 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "t5",
  "text2text-generation",
  "summarization",
  "generated_from_trainer",
  "dataset:RMWeerasinghe/BoardPapers-small",
  "base_model:google-t5/t5-small",
  "base_model:finetune:google-t5/t5-small",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	summarization | 2024-02-13T06:10:02Z | 
	---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-2048
  results: []
pipeline_tag: summarization
datasets:
- RMWeerasinghe/BoardPapers-small
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-2048
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 13.3433
- Rouge1: 0.029
- Rouge2: 0.0023
- Rougel: 0.0267
- Rougelsum: 0.0284
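Given the very low ROUGE scores above, outputs should be treated as experimental. Still, a minimal summarization sketch (the input text is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="RMWeerasinghe/t5-small-finetuned-2048")
long_text = "Full text of a board paper goes here ..."
print(summarizer(long_text, max_length=128, min_length=32)[0]["summary_text"])
```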
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| No log        | 0.67  | 1    | 25.1883         | 0.0242 | 0.0023 | 0.0218 | 0.0241    |
| No log        | 2.0   | 3    | 23.4392         | 0.0242 | 0.0023 | 0.0218 | 0.0241    |
| No log        | 2.67  | 4    | 22.5166         | 0.0252 | 0.0023 | 0.0229 | 0.0251    |
| No log        | 4.0   | 6    | 20.6643         | 0.0252 | 0.0023 | 0.0229 | 0.0251    |
| No log        | 4.67  | 7    | 19.7334         | 0.0252 | 0.0023 | 0.0229 | 0.0251    |
| No log        | 6.0   | 9    | 17.8137         | 0.0252 | 0.0023 | 0.0229 | 0.0251    |
| No log        | 6.67  | 10   | 17.1117         | 0.0252 | 0.0023 | 0.0229 | 0.0251    |
| No log        | 8.0   | 12   | 16.4384         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| No log        | 8.67  | 13   | 16.2401         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| No log        | 10.0  | 15   | 15.9056         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| No log        | 10.67 | 16   | 15.7547         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| No log        | 12.0  | 18   | 15.4599         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| No log        | 12.67 | 19   | 15.3192         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| 17.3983       | 14.0  | 21   | 15.0513         | 0.0329 | 0.005  | 0.0269 | 0.0324    |
| 17.3983       | 14.67 | 22   | 14.9270         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 16.0  | 24   | 14.7037         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 16.67 | 25   | 14.5987         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 18.0  | 27   | 14.4010         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 18.67 | 28   | 14.3084         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 20.0  | 30   | 14.1348         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 20.67 | 31   | 14.0554         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 22.0  | 33   | 13.9103         | 0.0367 | 0.005  | 0.0307 | 0.0357    |
| 17.3983       | 22.67 | 34   | 13.8446         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 17.3983       | 24.0  | 36   | 13.7251         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 17.3983       | 24.67 | 37   | 13.6713         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 17.3983       | 26.0  | 39   | 13.5781         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 26.67 | 40   | 13.5376         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 28.0  | 42   | 13.4689         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 28.67 | 43   | 13.4408         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 30.0  | 45   | 13.3953         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 30.67 | 46   | 13.3780         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 32.0  | 48   | 13.3538         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 32.67 | 49   | 13.3468         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
| 13.2153       | 33.33 | 50   | 13.3433         | 0.029  | 0.0023 | 0.0267 | 0.0284    |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.1 | 
| 
	llmware/bling-sheared-llama-1.3b-0.1 | 
	llmware | 2024-02-13T08:59:27Z | 192 | 25 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "llama",
  "text-generation",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "region:us"
] | 
	text-generation | 2023-10-22T17:03:12Z | 
	---
license: apache-2.0  
inference: false  
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a Sheared-LLaMA-1.3B base model.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with 
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even 
without using any advanced quantization optimizations.
### Benchmark Tests  
Evaluated against the benchmark test:   [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.  
--**Accuracy Score**:  **84.50** correct out of 100  
--Not Found Classification:  20.0%  
--Boolean:  66.25%  
--Math/Logic:  9.4%  
--Complex Questions (1-5):  1 (Low)  
--Summarization Quality (1-5):  3 (Coherent, extractive)  
--Hallucinations:  No hallucinations observed in test runs.  
For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Instruct-trained decoder 
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** princeton-nlp/Sheared-LLaMA-1.3B
  
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1.  Provide high-quality Instruct models that can run on a laptop for local testing.  We have found it extremely useful when building a
   proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2.  Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
    automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.  Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1B parameter GPT model.
BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically:   question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")  
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")  
Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
    import torch
    # "entries" is assumed to hold one test sample: a text passage ("context") and a question ("query")
    # prepare prompt packaging used in fine-tuning process
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    #   temperature: set at 0.3 for consistency of output
    #   max_new_tokens: set at 100 - may prematurely stop a few of the summaries
    outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
            )
    output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)  
    #   note: due to artifact of the fine-tuning, use this post-processing with HF generation 
    eot = output_only.find("<|endoftext|>")
    if eot > -1:
        output_only = output_only[:eot]
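Putting the pieces together, a self-contained sketch (the passage and question are illustrative only):
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")

    context = "The invoice total is $4,500 and is due on March 15."   # illustrative passage
    query = "What is the invoice total?"                              # illustrative question
    full_prompt = "<human>: " + context + "\n" + query + "\n" + "<bot>:"

    inputs = tokenizer(full_prompt, return_tensors="pt")
    outputs = model.generate(inputs.input_ids,
                             eos_token_id=tokenizer.eos_token_id,
                             pad_token_id=tokenizer.eos_token_id,
                             do_sample=True, temperature=0.3, max_new_tokens=100)
    print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))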
## Citation [optional]
This BLING model was built on top of a "Sheared Llama" model base - for more information about the "Sheared Llama" model, please see the paper referenced below:
@article{xia2023sheared,
   title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
   author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
   year={2023}
}
## Model Card Contact
Darren Oberst & llmware team
 | 
| 
	llmware/bling-falcon-1b-0.1 | 
	llmware | 2024-02-13T08:57:51Z | 41 | 12 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "falcon",
  "text-generation",
  "custom_code",
  "arxiv:2306.01116",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "region:us"
] | 
	text-generation | 2023-10-08T10:20:55Z | 
	---
license: apache-2.0  
inference: false  
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
bling-falcon-1b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a falcon-rw-1b base model.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with 
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even 
without using any advanced quantization optimizations.
### Benchmark Tests  
Evaluated against the benchmark test:   [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.  
--**Accuracy Score**:  **89.0** correct out of 100  
--Not Found Classification:  57.5%  
--Boolean:  57.5%  
--Math/Logic:  25%  
--Complex Questions (1-5):  1 (Low)  
--Summarization Quality (1-5):  3 (Coherent, extractive)  
--Hallucinations:  No hallucinations observed in test runs.  
Please note that these scoring results have been updated from the original (upward), as we corrected a small bug in the original test inference script for this model.  
The corrected test results are in the files repo, and have been generated with the test scripts in the repo.
For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GPTNeoX instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** tiiuae/falcon-rw-1b
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1.  Provide high-quality Instruct models that can run on a laptop for local testing.  We have found it extremely useful when building a
   proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2.  Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
    automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.  Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1B parameter GPT model.
BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically:   question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-falcon-1b-0.1")  
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")  
Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
    import torch
    # "entries" is assumed to hold one test sample: a text passage ("context") and a question ("query")
    # prepare prompt packaging used in fine-tuning process
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    #   temperature: set at 0.3 for consistency of output
    #   max_new_tokens: set at 100 - may prematurely stop a few of the summaries
    outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
            )
    output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)  
    
    
## Citation [optional]
This BLING model was built on top of a Falcon model base - for more information about the Falcon model, please see the paper referenced below:
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
## Model Card Contact
Darren Oberst & llmware team
 | 
| 
	llmware/bling-cerebras-1.3b-0.1 | 
	llmware | 2024-02-13T08:55:26Z | 19 | 4 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt2",
  "text-generation",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "region:us"
] | 
	text-generation | 2023-10-08T10:00:51Z | 
	---
license: apache-2.0  
inference: false  
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
BLING-cerebras-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, with instruct training on top of the cerebras/Cerebras-GPT-1.3B base.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with 
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even 
without using any advanced quantization optimizations.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** Instruct-trained GPT decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** cerebras/Cerebras-GPT-1.3B
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1.  Provide high-quality Instruct models that can run on a laptop for local testing.  We have found it extremely useful when building a
   proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2.  Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
    automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.  Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1B parameter GPT model.
BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically:   question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-cerebras-1.3b-0.1")  
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-cerebras-1.3b-0.1")  
Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
    import torch
    # "entries" is assumed to hold one test sample: a text passage ("context") and a question ("query")
    # prepare prompt packaging used in fine-tuning process
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    #   temperature: set at 0.3 for consistency of output
    #   max_new_tokens: set at 100 - may prematurely stop a few of the summaries
    outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
            )
    output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)  
   
## Citation [optional]
This BLING model is built on top of a Cerebras base GPT trained model - for more information about the Cerebras GPT models, please see the following paper:
@misc{cerebras2023gpt,
    title={Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster},
    author={Nolan Dey and Gurpreet Gosal and Zhiming (Charles) Chen and Hemant Khachane and William Marshall and Ribhu Pathria and Marvin Tom and Joe Hestness},
    month={April},
    year={2023}
}
## Model Card Contact
Darren Oberst & llmware team
 | 
| 
	llmware/bling-1.4b-0.1 | 
	llmware | 2024-02-13T08:54:45Z | 89 | 19 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "gpt_neox",
  "text-generation",
  "arxiv:2304.01373",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "region:us"
] | 
	text-generation | 2023-09-29T22:46:59Z | 
	---
license: apache-2.0  
inference: false  
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
BLING-1.4b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with 
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even 
without using any advanced quantization optimizations.
### Benchmark Tests  
Evaluated against the benchmark test:   [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)  
Average of 2 Test Runs with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.  
--**Accuracy Score**:  **82.25** correct out of 100  
--Not Found Classification:  40.0%  
--Boolean:  61.25%  
--Math/Logic:  8.75%  
--Complex Questions (1-5):  1 (Low)  
--Summarization Quality (1-5):  2 (Coherent, extractive)  
--Hallucinations:  No hallucinations observed in test runs.  
For test run results (and good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet" in this repo).
--As a reference point, this model shows substantial improvements in results compared with BLING 1.0B Pythia, even though the fine-tuning and base training are substantially the same.  The model's ability to follow instructions and answer detailed questions improves dramatically from 1.0B -> 1.4B parameters.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** llmware
- **Model type:** GPTNeoX instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** EleutherAI/Pythia-1.4b-v0
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
The intended use of BLING models is two-fold:
1.  Provide high-quality Instruct models that can run on a laptop for local testing.  We have found it extremely useful when building a
   proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2.  Push the state of the art for smaller Instruct-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
    automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources.  Rather than try to be "all things to all people," BLING models try to focus on a narrower set of Instructions more suitable to a ~1B parameter GPT model.
BLING is ideal for rapid prototyping, testing, and the ability to perform an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically:   question-answering, key-value extraction, and basic summarization as the core instruction types
without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
Please refer to the benchmark score and testing results for indicator as to the applicability of this model to your intended use case.   
We have found that this model is reasonably effective and accurate for fact-based, extractive tasks, including key-value, question-answering, and basic summarization.  
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:
    from transformers import AutoTokenizer, AutoModelForCausalLM  
    tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1.4b-0.1")  
    model = AutoModelForCausalLM.from_pretrained("llmware/bling-1.4b-0.1")  
Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model.  The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, to swap out the test set for a RAG workflow consisting of business documents.  
The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
1.  Text Passage Context, and
2.  Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
    my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
    import torch
    # "entries" is assumed to hold one test sample: a text passage ("context") and a question ("query")
    # prepare prompt packaging used in fine-tuning process
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)
    #   temperature: set at 0.3 for consistency of output
    #   max_new_tokens: set at 100 - may prematurely stop a few of the summaries
    outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
            )
    output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)
## Citation [optional]
BLING models are built on top of EleutherAI/Pythia base - please see citation for Pythia below:
@misc{biderman2023pythia,
      title={Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling}, 
      author={Stella Biderman and Hailey Schoelkopf and Quentin Anthony and Herbie Bradley and Kyle O'Brien and Eric Hallahan and Mohammad Aflah Khan and Shivanshu Purohit and USVSN Sai Prashanth and Edward Raff and Aviya Skowron and Lintang Sutawika and Oskar van der Wal},
      year={2023},
      eprint={2304.01373},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
## Model Card Contact
Darren Oberst & llmware team
 | 
| 
	SJ182120/l2_python | 
	SJ182120 | 2024-02-13T08:52:42Z | 0 | 0 | 
	peft | 
	[
  "peft",
  "text-generation",
  "region:us"
] | 
	text-generation | 2024-02-13T08:51:31Z | 
	---
library_name: peft
pipeline_tag: text-generation
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
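The same flags expressed in code (a sketch of the corresponding `transformers.BitsAndBytesConfig`; matching it exactly to this PEFT 0.4.0 checkpoint is an assumption):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes flags listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```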
### Framework versions
- PEFT 0.4.0 | 
| 
	yeniceriSGK/falcon-1b-pibrain-v2 | 
	yeniceriSGK | 2024-02-13T08:51:55Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T08:51:54Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	AyushRaj01/zephyr-support-chatbot | 
	AyushRaj01 | 2024-02-13T08:42:04Z | 2 | 0 | 
	peft | 
	[
  "peft",
  "tensorboard",
  "safetensors",
  "trl",
  "sft",
  "generated_from_trainer",
  "base_model:TheBloke/zephyr-7B-alpha-GPTQ",
  "base_model:adapter:TheBloke/zephyr-7B-alpha-GPTQ",
  "license:mit",
  "region:us"
] | null | 2024-01-19T08:03:13Z | 
	---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/zephyr-7B-alpha-GPTQ
model-index:
- name: zephyr-support-chatbot
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset.
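A minimal, hedged loading sketch (not provided by the author): it assumes this repo is a PEFT LoRA adapter for the GPTQ base named above, and that `peft`, `transformers`, and a GPTQ runtime (`auto-gptq`/`optimum`) are installed.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Resolves the base model recorded in the adapter config, then applies this adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    "AyushRaj01/zephyr-support-chatbot", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-alpha-GPTQ")
```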
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 | 
| 
	Oysiyl/w2v-bert-2.0-ukrainian-colab-CV16.0 | 
	Oysiyl | 2024-02-13T08:39:55Z | 124 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "wav2vec2-bert",
  "automatic-speech-recognition",
  "generated_from_trainer",
  "uk",
  "dataset:mozilla-foundation/common_voice_16_1",
  "base_model:ylacombe/w2v-bert-2.0",
  "base_model:finetune:ylacombe/w2v-bert-2.0",
  "license:mit",
  "model-index",
  "endpoints_compatible",
  "region:us"
] | 
	automatic-speech-recognition | 2024-01-30T20:30:32Z | 
	---
base_model: ylacombe/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-ukrainian-colab-CV16.0
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: mozilla-foundation/common_voice_16_1
      type: mozilla-foundation/common_voice_16_1
      config: uk
      split: test
      args: uk
    metrics:
    - name: Wer
      type: wer
      value: 0.0987
license: mit
datasets:
- mozilla-foundation/common_voice_16_1
language:
- uk
pipeline_tag: automatic-speech-recognition
library_name: transformers
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-ukrainian-colab-CV16.0
This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the mozilla-foundation/common_voice_16_1 (Ukrainian) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1438
- Wer: 0.0987
Note: the model was fine-tuned on the lowercase Ukrainian alphabet plus the apostrophe ("'"), so it cannot produce punctuation or capitalization.
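A minimal usage sketch (not from the authors), using the standard automatic-speech-recognition pipeline; the audio path is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Oysiyl/w2v-bert-2.0-ukrainian-colab-CV16.0",
)
# Returns lowercase text without punctuation, as noted above
print(asr("sample.wav"))  # placeholder path to a local audio file
```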
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0371        | 1.98  | 525  | 0.1509          | 0.1498 |
| 0.0728        | 3.96  | 1050 | 0.1256          | 0.1279 |
| 0.0382        | 5.94  | 1575 | 0.1260          | 0.1041 |
| 0.0213        | 7.92  | 2100 | 0.1333          | 0.0997 |
| 0.0118        | 9.91  | 2625 | 0.1438          | 0.0987 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.1 | 
| 
	haihuynh/Reinforce-Pixelcopter-PLE-v0 | 
	haihuynh | 2024-02-13T08:32:48Z | 0 | 0 | null | 
	[
  "Pixelcopter-PLE-v0",
  "reinforce",
  "reinforcement-learning",
  "custom-implementation",
  "deep-rl-class",
  "model-index",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T08:32:43Z | 
	---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 35.10 +/- 27.01
      name: mean_reward
      verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
   | 
| 
	Wiredwizard/test | 
	Wiredwizard | 2024-02-13T08:29:21Z | 0 | 0 | null | 
	[
  "license:creativeml-openrail-m",
  "region:us"
] | null | 2024-02-13T08:29:21Z | 
	---
license: creativeml-openrail-m
---
 | 
| 
	Khemmanat/ppo-LunarLander-v2 | 
	Khemmanat | 2024-02-13T08:25:43Z | 0 | 0 | 
	stable-baselines3 | 
	[
  "stable-baselines3",
  "LunarLander-v2",
  "deep-reinforcement-learning",
  "reinforcement-learning",
  "model-index",
  "region:us"
] | 
	reinforcement-learning | 2024-02-13T08:25:21Z | 
	---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 264.03 +/- 21.11
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal, hedged sketch (the checkpoint filename below is an assumption; check the repo's Files tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub; the filename is an assumption
checkpoint = load_from_hub(
    repo_id="Khemmanat/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
 | 
| 
	Basha738/llama2-13B-supervised-ft-7-epochs-351 | 
	Basha738 | 2024-02-13T08:23:57Z | 7 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "4-bit",
  "bitsandbytes",
  "region:us"
] | 
	text-generation | 2024-02-13T08:19:20Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
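In the absence of an official snippet, a minimal sketch assuming the standard transformers causal-LM API; the tags suggest the weights were saved with bitsandbytes 4-bit quantization, which `from_pretrained` picks up from the checkpoint config.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Basha738/llama2-13B-supervised-ft-7-epochs-351"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```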
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	paola-md/RELEXset-Predictor | 
	paola-md | 2024-02-13T07:46:38Z | 177 | 0 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "roberta",
  "text-classification",
  "generated_from_trainer",
  "license:apache-2.0",
  "autotrain_compatible",
  "endpoints_compatible",
  "region:us"
] | 
	text-classification | 2022-09-03T08:40:16Z | 
	---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: recipe-lr5e05-wd0.02-bs32
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# recipe-lr5e05-wd0.02-bs32
This model is a fine-tuned version of [paola-md/recipe-distilroberta-Is](https://huggingface.co/paola-md/recipe-distilroberta-Is) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2738
- Rmse: 0.5232
- Mse: 0.2738
- Mae: 0.4117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse   | Mse    | Mae    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2788        | 1.0   | 1245 | 0.2766          | 0.5259 | 0.2766 | 0.4212 |
| 0.2757        | 2.0   | 2490 | 0.2777          | 0.5270 | 0.2777 | 0.4271 |
| 0.2741        | 3.0   | 3735 | 0.2745          | 0.5239 | 0.2745 | 0.4202 |
| 0.2725        | 4.0   | 4980 | 0.2760          | 0.5254 | 0.2760 | 0.4030 |
| 0.2711        | 5.0   | 6225 | 0.2752          | 0.5246 | 0.2752 | 0.4186 |
| 0.2692        | 6.0   | 7470 | 0.2738          | 0.5232 | 0.2738 | 0.4117 |
### Test results
Using the checkpoint saved after 4 epochs, we achieved the following results on the test set:
```
{
    "test_mae": 0.39440343575349946,
    "test_runtime": 22.6921,
    "test_samples_per_second": 866.867,
    "test_steps_per_second": 6.787
}
```
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
 | 
| 
	kenchenxingyu/flan-large-lora-stance-human4 | 
	kenchenxingyu | 2024-02-13T07:34:32Z | 0 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "arxiv:1910.09700",
  "endpoints_compatible",
  "region:us"
] | null | 2024-02-13T07:34:27Z | 
	---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
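A speculative sketch only: the repo name suggests a PEFT LoRA adapter for a FLAN-T5-large base, but the card does not confirm the architecture. If that guess is wrong, this will not load; check the adapter config in the repo files.
```python
from peft import AutoPeftModelForSeq2SeqLM

# Loads the base model recorded in the adapter config and applies the LoRA weights
model = AutoPeftModelForSeq2SeqLM.from_pretrained(
    "kenchenxingyu/flan-large-lora-stance-human4"
)
```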
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	AmilaUvaz/Amelia | 
	AmilaUvaz | 2024-02-13T07:33:01Z | 2 | 2 | 
	diffusers | 
	[
  "diffusers",
  "text-to-image",
  "stable-diffusion",
  "lora",
  "template:sd-lora",
  "base_model:stabilityai/stable-diffusion-xl-base-1.0",
  "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
  "license:creativeml-openrail-m",
  "region:us"
] | 
	text-to-image | 2024-02-08T11:44:39Z | 
	---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
    Create a portrait of a young woman with an angular jawline, brown eyes that
    hint at both strength and vulnerability, and luscious, cascading curls of
    long hair. Illuminate the depth of her gaze and the way the curls frame her
    face, adding an element of sophistication, Chelsea Gilligan woman, sitting
    on chair, smiling, long wavy hair,
  output:
    url: images/image (82).png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Chelsea Gilligan woman
license: creativeml-openrail-m
---
# Amelia
<Gallery />
## Model description 
Amelia
## Trigger words
You should use `Chelsea Gilligan woman` to trigger the image generation.
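A hedged usage sketch (not from the author): load the SDXL base, apply this LoRA, and include the trigger phrase in the prompt. If the weights are not stored under the default diffusers filename, pass `weight_name=` explicitly.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AmilaUvaz/Amelia")

prompt = "Chelsea Gilligan woman, sitting on chair, smiling, long wavy hair"
pipe(prompt).images[0].save("amelia.png")
```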
## Download model
Weights for this model are available in Safetensors format.
[Download](/AmilaUvaz/Amelia/tree/main) them in the Files & versions tab.
 | 
| 
	iadithyan/splitter_70b | 
	iadithyan | 2024-02-13T07:32:52Z | 4 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "unsloth",
  "conversational",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-11T09:23:21Z | 
	---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
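In the absence of an official snippet, a minimal sketch using the high-level pipeline API, assuming a standard Llama-style causal-LM checkpoint:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="iadithyan/splitter_70b", device_map="auto")
print(generator("Hello", max_new_tokens=50)[0]["generated_text"])
```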
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 
| 
	Ashish1310/zephyr-support-chatbot | 
	Ashish1310 | 2024-02-13T07:26:04Z | 0 | 0 | null | 
	[
  "tensorboard",
  "safetensors",
  "trl",
  "sft",
  "generated_from_trainer",
  "base_model:TheBloke/zephyr-7B-alpha-GPTQ",
  "base_model:finetune:TheBloke/zephyr-7B-alpha-GPTQ",
  "license:mit",
  "region:us"
] | null | 2024-02-12T19:26:56Z | 
	---
license: mit
base_model: TheBloke/zephyr-7B-alpha-GPTQ
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: zephyr-support-chatbot
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-support-chatbot
This model is a fine-tuned version of [TheBloke/zephyr-7B-alpha-GPTQ](https://huggingface.co/TheBloke/zephyr-7B-alpha-GPTQ) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1
 | 
| 
	robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit | 
	robinsmits | 2024-02-13T07:10:47Z | 107 | 5 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "conversational",
  "unsloth",
  "chatalpaca",
  "en",
  "dataset:robinsmits/ChatAlpaca-20K",
  "arxiv:1910.09700",
  "license:apache-2.0",
  "model-index",
  "autotrain_compatible",
  "text-generation-inference",
  "4-bit",
  "bitsandbytes",
  "region:us"
] | 
	text-generation | 2024-02-10T11:19:50Z | 
	---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- mistral
- conversational
- unsloth
- chatalpaca
datasets:
- robinsmits/ChatAlpaca-20K
inference: false
pipeline_tag: text-generation
model-index:
- name: Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 62.12
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 67.29
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 40.33
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit
      name: Open LLM Leaderboard
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
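In the absence of an official snippet, a minimal sketch follows; it assumes the tokenizer ships a chat template (the model is tagged `conversational`) and that the stored 4-bit bitsandbytes weights load as saved.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "robinsmits/Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clean code."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```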
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_robinsmits__Mistral-Instruct-7B-v0.2-ChatAlpacaV2-4bit)
|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |65.34|
|AI2 Reasoning Challenge (25-Shot)|62.12|
|HellaSwag (10-Shot)              |84.55|
|MMLU (5-Shot)                    |60.66|
|TruthfulQA (0-shot)              |67.29|
|Winogrande (5-shot)              |77.11|
|GSM8k (5-shot)                   |40.33|
 | 
| 
	giprime/OOM-13B_02 | 
	giprime | 2024-02-13T07:05:20Z | 59 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "en",
  "ko",
  "license:cc-by-nc-sa-4.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-12T23:08:11Z | 
	---
license: cc-by-nc-sa-4.0
language:
- en
- ko
library_name: transformers
---
## Model Architecture
OOM-13B_02 is a language model that uses an optimized transformer architecture based on Llama-2.
## Model description
Based on "beomi/llama-2-koen-13b"
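A minimal, hedged loading sketch (usage is not documented in this card; assumes the standard transformers causal-LM API):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giprime/OOM-13B_02"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```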
## Intended uses & limitations
T.B.D.
## Training and evaluation data
T.B.D.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 24
- gradient_accumulation_steps: 1
- total_train_batch_size: 
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1 | 
| 
	hotdogs/open-uka-v1-1-7B | 
	hotdogs | 2024-02-13T06:59:07Z | 9 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "mistral",
  "text-generation",
  "en",
  "th",
  "license:other",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T04:13:30Z | 
	---
license: other
language:
- en
- th
---
 | 
| 
	varun-v-rao/opt-1.3b-squad-model2 | 
	varun-v-rao | 2024-02-13T06:50:20Z | 87 | 0 | 
	transformers | 
	[
  "transformers",
  "tensorboard",
  "safetensors",
  "opt",
  "question-answering",
  "generated_from_trainer",
  "dataset:varun-v-rao/squad",
  "base_model:facebook/opt-1.3b",
  "base_model:finetune:facebook/opt-1.3b",
  "license:other",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	question-answering | 2024-02-12T22:36:09Z | 
	---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
datasets:
- varun-v-rao/squad
model-index:
- name: opt-1.3b-squad-model2
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b-squad-model2
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the squad dataset.
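A hedged usage sketch (not from the author), assuming the checkpoint exposes a question-answering head as the pipeline tag indicates:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="varun-v-rao/opt-1.3b-squad-model2")
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```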
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
 | 
| 
	edumunozsala/TinyLlama-1431k-python-coder | 
	edumunozsala | 2024-02-13T06:44:26Z | 125 | 1 | 
	transformers | 
	[
  "transformers",
  "pytorch",
  "llama",
  "text-generation",
  "axolot",
  "code",
  "coding",
  "Tinyllama",
  "dataset:iamtarun/python_code_instructions_18k_alpaca",
  "license:apache-2.0",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-12T17:48:36Z | 
	---
tags:
- axolot
- code
- coding
- Tinyllama
- axolot
model-index:
- name: TinyLlama-1431k-python-coder
  results: []
license: apache-2.0
language:
- code
datasets:
- iamtarun/python_code_instructions_18k_alpaca
pipeline_tag: text-generation
---
# TinyLlama 1.1B 1431k 4-bit Python Coder 👩💻 
**TinyLlama 1.1B** fine-tuned on the **python_code_instructions_18k_alpaca** code-instructions dataset using the **Axolotl** library in 4-bit, with the [PEFT](https://github.com/huggingface/peft) library.
## Pretrained description
[TinyLlama-1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)
The [TinyLlama project](https://github.com/jzhang38/TinyLlama) aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, they can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀.
They adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
## Training data
[python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)
The dataset contains problem descriptions and code in the Python language. This dataset is taken from sahil2801/code_instructions_120k, which adds a prompt column in Alpaca style.
### Training hyperparameters
The following `axolotl` configuration was used during training:
- load_in_8bit: false
- load_in_4bit: true
- strict: false
- datasets:
    - path: iamtarun/python_code_instructions_18k_alpaca
      type: alpaca
- dataset_prepared_path:
- val_set_size: 0.05
- output_dir: ./qlora-out
- adapter: qlora
- sequence_len: 1096
- sample_packing: true
- pad_to_sequence_len: true
- lora_r: 32
- lora_alpha: 16
- lora_dropout: 0.05
- lora_target_modules:
- lora_target_linear: true
- lora_fan_in_fan_out:
- gradient_accumulation_steps: 1
- micro_batch_size: 1
- num_epochs: 2
- max_steps:
- optimizer: paged_adamw_32bit
- lr_scheduler: cosine
- learning_rate: 0.0002
- train_on_inputs: false
- group_by_length: false
- bf16: false
- fp16: true
- tf32: false
- gradient_checkpointing: true
- logging_steps: 10
- flash_attention: false
- warmup_steps: 10
- weight_decay: 0.0
### Framework versions
- torch=="2.1.2"
- flash-attn=="2.5.0"
- deepspeed=="0.13.1"
- axolotl=="0.4.0"
### Example of usage
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "edumunozsala/TinyLlama-1431k-python-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the checkpoint in 4-bit to keep the memory footprint small
model = AutoModelForCausalLM.from_pretrained(
    model_id, load_in_4bit=True, torch_dtype=torch.float16, device_map="auto"
)

instruction = "Write a Python function to display the first and last elements of a list."
user_input = ""

# Build the Alpaca-style prompt the model was fine-tuned on
prompt = f"""### Instruction:
Use the Task below and the Input given to write the Response, which is a programming code that can solve the Task.
### Task:
{instruction}
### Input:
{user_input}
### Response:
"""

input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
outputs = model.generate(input_ids=input_ids, max_new_tokens=100, do_sample=True, top_p=0.9, temperature=0.3)

print(f"Prompt:\n{prompt}\n")
# Strip the prompt from the decoded output so only the generated code is shown
print(f"Generated response:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")
```
### Citation
```
@misc {edumunozsala_2023,
	author       = { {Eduardo Muñoz} },
	title        = { TinyLlama-1431k-python-coder },
	year         = 2024,
	url          = { https://huggingface.co/edumunozsala/TinyLlama-1431k-python-coder },
	publisher    = { Hugging Face }
}
``` | 
| 
	ybelkada/test-tiny-llama-unsloth | 
	ybelkada | 2024-02-13T06:40:56Z | 181 | 0 | 
	transformers | 
	[
  "transformers",
  "safetensors",
  "llama",
  "text-generation",
  "unsloth",
  "arxiv:1910.09700",
  "autotrain_compatible",
  "text-generation-inference",
  "endpoints_compatible",
  "region:us"
] | 
	text-generation | 2024-02-13T06:40:55Z | 
	---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
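A minimal sketch for completeness: this appears to be a tiny test checkpoint, so outputs are not expected to be meaningful.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ybelkada/test-tiny-llama-unsloth"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```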
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure 
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
 | 