Schema: modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)

modelId: DBangshu/Base_gemma_9_0 | author: DBangshu | last_modified: 2024-06-24T03:40:50Z | downloads: 7 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T03:38:40Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
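Until the authors add a snippet, here is a minimal sketch (an assumption based on the repo's `gemma`/`text-generation` tags, not the authors' documented usage) using the standard transformers causal-LM API:
```python
# Minimal sketch, assuming the standard transformers causal-LM API
# applies to this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DBangshu/Base_gemma_9_0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```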
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

modelId: ignaciovillanueva/umt5-base-finetuned-model | author: ignaciovillanueva | last_modified: 2024-06-24T03:30:33Z | downloads: 7 | likes: 0 | library_name: transformers | tags: ["transformers", "tensorboard", "safetensors", "umt5", "text2text-generation", "generated_from_trainer", "base_model:google/umt5-base", "base_model:finetune:google/umt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text2text-generation | createdAt: 2024-06-24T03:28:34Z | card:
---
license: apache-2.0
base_model: google/umt5-base
tags:
- generated_from_trainer
model-index:
- name: umt5-base-finetuned-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# umt5-base-finetuned-model
This model is a fine-tuned version of [google/umt5-base](https://huggingface.co/google/umt5-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 100
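For readers reproducing this setup, a sketch (an assumption, not taken from the authors' training script) of how the listed values map onto transformers `Seq2SeqTrainingArguments`:
```python
# Sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="umt5-base-finetuned-model",
    learning_rate=2e-4,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="constant",
    max_steps=100,
)
```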
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1

modelId: damgomz/ft_32_11e6_base_x4 | author: damgomz | last_modified: 2024-06-24T03:26:28Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T10:48:04Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 62042.28346157074 |
| Emissions (CO2eq in kg) | 0.0375427490804359 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7324422083359654 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0646268364918729 |
| Consumed energy (kWh) | 0.7970690448278377 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11943139566352368 |
| Emissions (CO2eq in kg) | 0.02429989435578187 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_11e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.699534 | 0.263259 |
| 1 | 0.327131 | 0.254763 | 0.894355 |
| 2 | 0.193591 | 0.231249 | 0.916772 |
| 3 | 0.146482 | 0.230188 | 0.927157 |
| 4 | 0.100900 | 0.251480 | 0.918744 |
| 5 | 0.059710 | 0.303986 | 0.918022 |
| 6 | 0.040972 | 0.381227 | 0.896910 |
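## Usage
A minimal sketch (an assumption; the card documents no snippet, and the label names depend on this fine-tune's config), using the widget text above as input:
```python
# Sketch only: the standard transformers text-classification pipeline.
from transformers import pipeline

classifier = pipeline("text-classification", model="damgomz/ft_32_11e6_base_x4")
print(classifier("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```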

modelId: damgomz/ft_32_13e6_base_x1 | author: damgomz | last_modified: 2024-06-24T03:25:20Z | downloads: 19 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T10:49:33Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 61969.37371945381 |
| Emissions (CO2eq in kg) | 0.0374986335659504 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7315815464261486 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0645508831602831 |
| Consumed energy (kWh) | 0.7961324295864309 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1192910444099486 |
| Emissions (CO2eq in kg) | 0.02427133804011941 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_13e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.747683 | 0.372014 |
| 1 | 0.322004 | 0.252306 | 0.935384 |
| 2 | 0.190558 | 0.236110 | 0.930112 |
| 3 | 0.137793 | 0.221180 | 0.929360 |
| 4 | 0.095192 | 0.237832 | 0.930265 |
| 5 | 0.064447 | 0.275405 | 0.930897 |
| 6 | 0.042057 | 0.294993 | 0.913735 |

modelId: damgomz/ft_32_18e6_base_x4 | author: damgomz | last_modified: 2024-06-24T03:21:59Z | downloads: 7 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T10:42:45Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 61774.44494199753 |
| Emissions (CO2eq in kg) | 0.0373806765880717 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7292802593949762 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0643478289273878 |
| Consumed energy (kWh) | 0.7936280883223655 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11891580651334524 |
| Emissions (CO2eq in kg) | 0.0241949909356157 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_18e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.8e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.724677 | 0.306478 |
| 1 | 0.325122 | 0.243701 | 0.896817 |
| 2 | 0.193125 | 0.240784 | 0.896417 |
| 3 | 0.139279 | 0.282203 | 0.912782 |
| 4 | 0.098899 | 0.261478 | 0.927773 |
| 5 | 0.063704 | 0.340761 | 0.911821 |
| 6 | 0.037984 | 0.413277 | 0.918257 |

modelId: hui168/ppo-SpaceInvadersNoFrameskip-v4 | author: hui168 | last_modified: 2024-06-24T03:19:48Z | downloads: 33 | likes: 0 | library_name: stable-baselines3 | tags: ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | pipeline_tag: reinforcement-learning | createdAt: 2024-06-08T02:45:27Z | card:
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 2887.50 +/- 99.26
name: mean_reward
verified: false
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga hui168 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga hui168 -f logs/
python -m rl_zoo3.enjoy --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga hui168
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 0.001),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 0.0001),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 100000),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
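Beyond the RL Zoo scripts above, a sketch for loading the agent directly with stable-baselines3; the `.zip` filename here is an assumption based on the usual RL Zoo push convention, not confirmed by this card:
```python
# Sketch only: fetch the checkpoint with huggingface_sb3 and load it with SB3.
# The filename is assumed from the usual RL Zoo naming convention.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(
    repo_id="hui168/ppo-SpaceInvadersNoFrameskip-v4",
    filename="ppo-SpaceInvadersNoFrameskip-v4.zip",
)
model = PPO.load(checkpoint)
```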

modelId: tsavage68/Summary_L3_50steps_1e6rate_05beta_CSFTDPO | author: tsavage68 | last_modified: 2024-06-24T03:18:13Z | downloads: 5 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2", "base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T03:11:50Z | card:
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_50steps_1e6rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_50steps_1e6rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5962
- Rewards/chosen: 0.0976
- Rewards/rejected: -1.3577
- Rewards/accuracies: 0.1400
- Rewards/margins: 1.4553
- Logps/rejected: -17.9791
- Logps/chosen: -9.1876
- Logits/rejected: -1.0985
- Logits/chosen: -1.1002
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 50
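A sketch (an assumption, not the authors' script) of these values as a trl `DPOConfig`; `beta=0.5` is inferred from "05beta" in the model name and is not stated in the card:
```python
# Sketch only: the listed hyperparameters as a trl DPOConfig.
# beta=0.5 is an inference from the model name, not documented here.
from trl import DPOConfig

config = DPOConfig(
    output_dir="Summary_L3_50steps_1e6rate_05beta_CSFTDPO",
    learning_rate=1e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=50,
    beta=0.5,
    seed=42,
)
```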
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.555 | 0.2004 | 50 | 0.5962 | 0.0976 | -1.3577 | 0.1400 | 1.4553 | -17.9791 | -9.1876 | -1.0985 | -1.1002 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1

modelId: damgomz/ft_32_13e6_base_x8 | author: damgomz | last_modified: 2024-06-24T03:01:16Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T10:57:18Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 60530.468687057495 |
| Emissions (CO2eq in kg) | 0.0366279330673225 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7145945333186138 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0630520785860717 |
| Consumed energy (kWh) | 0.7776466119046849 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11652115222258569 |
| Emissions (CO2eq in kg) | 0.023707766902430854 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_13e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.703413 | 0.599719 |
| 1 | 0.343454 | 0.234083 | 0.918774 |
| 2 | 0.209892 | 0.227700 | 0.915555 |
| 3 | 0.160849 | 0.231055 | 0.924774 |
| 4 | 0.126931 | 0.250159 | 0.911939 |
| 5 | 0.087303 | 0.325753 | 0.916357 |
| 6 | 0.058676 | 0.369368 | 0.909896 |

modelId: ljnlonoljpiljm/paligemma_docci_caption_enriched_v1-5 | author: ljnlonoljpiljm | last_modified: 2024-06-24T02:37:26Z | downloads: 12 | likes: 1 | library_name: transformers | tags: ["transformers", "tensorboard", "safetensors", "paligemma", "image-text-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: image-text-to-text | createdAt: 2024-06-23T15:35:37Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
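Until the authors add a snippet, here is a minimal sketch (an assumption based on the repo's `paligemma`/`image-text-to-text` tags) using the standard PaliGemma API in transformers; the prompt and image URL are illustrative:
```python
# Sketch only: standard PaliGemma conditional generation for captioning.
import requests
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "ljnlonoljpiljm/paligemma_docci_caption_enriched_v1-5"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text="caption en", images=image, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```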
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

modelId: damgomz/ft_32_13e6_base_x4 | author: damgomz | last_modified: 2024-06-24T02:26:58Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T11:36:34Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 58472.9090526104 |
| Emissions (CO2eq in kg) | 0.0353828713730564 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6903039364420729 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0609088058151305 |
| Consumed energy (kWh) | 0.7512127422572015 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.112560349926275 |
| Emissions (CO2eq in kg) | 0.02290188937893907 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_13e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.3e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.769925 | 0.338207 |
| 1 | 0.318355 | 0.234563 | 0.926135 |
| 2 | 0.195246 | 0.224179 | 0.909345 |
| 3 | 0.144236 | 0.233321 | 0.929572 |
| 4 | 0.095344 | 0.285724 | 0.918945 |
| 5 | 0.062180 | 0.348233 | 0.910088 |
| 6 | 0.041635 | 0.362033 | 0.907800 |

modelId: nuprl/MultiPL-T-StarCoder2_15B | author: nuprl | last_modified: 2024-06-24T02:25:37Z | downloads: 18 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "starcoder2", "text-generation", "dataset:nuprl/MultiPL-T", "arxiv:2308.09895", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-03-26T09:53:46Z | card:
---
license: openrail
library_name: transformers
datasets:
- nuprl/MultiPL-T
---
# MultiPL-T StarCoder2-15b
This repository holds several [StarCoder2-15b](https://huggingface.co/bigcode/starcoder2-15b) fine-tunes, all fine-tuned on MultiPL-T data.
Examine the commit message to determine the language and checkpoint. We have a checkpoint
for each epoch.
For more information about the training process, see the MultiPL-T paper:
```bibtex
@misc{cassano:multipl-t,
title={Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs},
author={Federico Cassano and John Gouwar and Francesca Lucchetti and Claire Schlesinger and Anders Freeman and Carolyn Jane Anderson and Molly Q Feldman and Michael Greenberg and Abhinav Jangda and Arjun Guha},
year={2024},
eprint={2308.09895},
archivePrefix={arXiv},
primaryClass={cs.PL}
}
```
For usage instructions, see the model card for the original model. Replace the model name with the name of this repository, and set `revision=COMMIT_HASH`.
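Concretely, that looks like the following sketch; `COMMIT_HASH` is left as a placeholder, as in the instructions above:
```python
# Sketch only: load a specific fine-tune by pinning `revision` to the commit
# hash of the desired language/epoch (see the repo's commit history).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nuprl/MultiPL-T-StarCoder2_15B"
revision = "COMMIT_HASH"  # placeholder: pick a hash from the commit history
tokenizer = AutoTokenizer.from_pretrained(repo, revision=revision)
model = AutoModelForCausalLM.from_pretrained(repo, revision=revision, device_map="auto")
```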

modelId: luishcarvalho/llama_smart_contract_4 | author: luishcarvalho | last_modified: 2024-06-24T02:24:38Z | downloads: 5 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T02:22:37Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
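Until the authors add a snippet, here is a minimal sketch (an assumption, not the authors' documented usage) using the generic transformers text-generation pipeline; the prompt is illustrative, suggested only by the repo name:
```python
# Sketch only: generic text-generation pipeline; the prompt is a guess
# based on the smart-contract theme of the repo name.
from transformers import pipeline

generator = pipeline("text-generation", model="luishcarvalho/llama_smart_contract_4", device_map="auto")
print(generator("pragma solidity ^0.8.0;", max_new_tokens=64)[0]["generated_text"])
```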
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]

modelId: jeiku/Aura_Revived_Base | author: jeiku | last_modified: 2024-06-24T02:24:30Z | downloads: 7 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Undi95/Llama3-Unholy-8B-OAS", "base_model:merge:Undi95/Llama3-Unholy-8B-OAS", "base_model:failspy/Llama-3-8B-Instruct-MopeyMule", "base_model:merge:failspy/Llama-3-8B-Instruct-MopeyMule", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T02:15:47Z | card:
---
base_model:
- Undi95/Llama3-Unholy-8B-OAS
- failspy/Llama-3-8B-Instruct-MopeyMule
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS)
* [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Undi95/Llama3-Unholy-8B-OAS
layer_range: [0,32]
- model: failspy/Llama-3-8B-Instruct-MopeyMule
layer_range: [0,32]
merge_method: slerp
base_model: Undi95/Llama3-Unholy-8B-OAS
parameters:
t:
- filter: self_attn
value: [0, 0.3, 0.5, 0.7, 1]
- filter: mlp
value: [1, 0.7, 0.5, 0.3, 0]
- value: 0.25
dtype: bfloat16
```
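For intuition, here is an illustrative sketch of the spherical linear interpolation (SLERP) that this method applies per weight tensor, with `t` varying across layers as the `parameters.t` schedule above specifies; this is not mergekit's exact implementation:
```python
# Illustrative only: SLERP between two flattened weight tensors a and b.
import numpy as np

def slerp(t, a, b, eps=1e-8):
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```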

modelId: damgomz/ft_32_9e6_base_x4 | author: damgomz | last_modified: 2024-06-24T02:14:45Z | downloads: 12 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "albert", "text-classification", "en", "autotrain_compatible", "endpoints_compatible", "region:us"] | pipeline_tag: text-classification | createdAt: 2024-06-23T10:56:43Z | card:
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 57740.51975607872 |
| Emissions (CO2eq in kg) | 0.0349396951409503 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.681657758170532 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0601459220262864 |
| Consumed energy (kWh) | 0.7418036801968183 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.11115050053045154 |
| Emissions (CO2eq in kg) | 0.022615036904464165 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_9e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 9e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.718265 | 0.417621 |
| 1 | 0.329715 | 0.237797 | 0.920274 |
| 2 | 0.203771 | 0.226090 | 0.916015 |
| 3 | 0.154634 | 0.241886 | 0.917662 |
| 4 | 0.105754 | 0.272587 | 0.909722 |
| 5 | 0.069278 | 0.333512 | 0.902355 |
| 6 | 0.047279 | 0.374489 | 0.887363 |

modelId: jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF | author: jfer1015 | last_modified: 2024-06-24T02:10:04Z | downloads: 23 | likes: 0 | library_name: null | tags: ["gguf", "llama-cpp", "gguf-my-repo", "base_model:mistralai/Mistral-7B-Instruct-v0.3", "base_model:quantized:mistralai/Mistral-7B-Instruct-v0.3", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | pipeline_tag: null | createdAt: 2024-06-24T02:09:41Z | card:
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-7B-Instruct-v0.3`](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo jfer1015/Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF --hf-file mistral-7b-instruct-v0.3-q4_k_m.gguf -c 2048
```

modelId: John6666/hoseki-lustrousmix-pony-v1-sdxl | author: John6666 | last_modified: 2024-06-24T02:02:08Z | downloads: 1278 | likes: 2 | library_name: diffusers | tags: ["diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "pony", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | pipeline_tag: text-to-image | createdAt: 2024-06-24T01:55:56Z | card:
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/534425/hoseki-lustrousmix-pony-xl?modelVersionId=594029).
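A minimal sketch (an assumption; the card documents no snippet) for loading this checkpoint with diffusers, per the `StableDiffusionXLPipeline` tag; the prompt is illustrative:
```python
# Sketch only: standard diffusers SDXL loading and generation.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/hoseki-lustrousmix-pony-v1-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, looking at viewer, outdoors", num_inference_steps=28).images[0]
image.save("out.png")
```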

modelId: bigstorm/firefunction-v2-6.0bpw-8hb-exl2 | author: bigstorm | last_modified: 2024-06-24T01:58:20Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "function-calling", "conversational", "license:llama3", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "6-bit", "exl2", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T01:38:03Z | card:
---
license: llama3
tags:
- function-calling
---
# BigStorm - ExLlamaV2 (Exl2) Quantization
- 6.0 bpw target
- 8 head bits
Enjoy! Raise an issue if you'd like other BPW levels.
#### Base Model Card Follows:
---
# FireFunction V2: Fireworks Function Calling Model
[**Try on Fireworks**](https://fireworks.ai/models/fireworks/firefunction-v2) | [**API Docs**](https://readme.fireworks.ai/docs/function-calling) | [**Demo App**](https://functional-chat.vercel.app/) | [**Discord**](https://discord.gg/mMqQxvFD9A)
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b6f3a72f5a966b9722de88/nJNtxLzWswBDKK1iOZblb.png" alt="firefunction" width="400"/>
FireFunction is a state-of-the-art function calling model with a commercially viable license. View detailed info in our [announcement blog](https://fireworks.ai/blog/firefunction-v2-launch-post). Key info and highlights:
**Comparison with other models:**
- Competitive with GPT-4o at function-calling, scoring 0.81 vs 0.80 on a medley of public evaluations
- Trained on Llama 3 and retains Llama 3’s conversation and instruction-following capabilities, scoring 0.84 vs Llama 3’s 0.89 on MT bench
- Significant quality improvements over FireFunction v1 across a broad range of metrics
**General info:**
🐾 Successor of the [FireFunction](https://fireworks.ai/models/fireworks/firefunction-v1) model
🔆 Support of parallel function calling (unlike FireFunction v1) and good instruction following
💡 Hosted on the [Fireworks](https://fireworks.ai/models/fireworks/firefunction-v2) platform at < 10% of the cost of GPT-4o and 2x the speed
## Intended Use and Limitations
### Supported usecases
The model was tuned to perform well on a range of use cases, including:
* general instruction following
* multi-turn chat mixing vanilla messages with function calls
* single- and parallel function calling
* up to 20 function specs supported at once
* structured information extraction
The model has an 8k context window, like Llama 3.
### Out-of-Scope Use
The model was not optimized for the following use cases:
* 100+ function specs
* nested function calling
## Metrics
| Benchmark | FireFunction v1 | FireFunction v2 | Llama 3 70b Instruct | GPT-4o |
|:-----------------------------------|:----------------|:----------------|:---------------------|:-------|
| Gorilla simple | 0.91 | 0.94 | 0.925 | 0.88 |
| Gorilla multiple_function | 0.92 | 0.91 | 0.86 | 0.91 |
| Gorilla parallel_function | 0 | 0.9 | 0.86 | 0.89 |
| Gorilla parallel_multiple_function | 0 | 0.8 | 0.615 | 0.72 |
| Nexus parallel | 0.38 | 0.53 | 0.3 | 0.47 |
| MT-Bench | 0.73 | 0.84 | 0.89 | 0.93 |
| Average | 0.49 | 0.82 | 0.74 | 0.8 |
## Example Usage
See [documentation](https://readme.fireworks.ai/docs/function-calling) for more detail.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import json
from datetime import datetime
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("fireworks-ai/firefunction-v2", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("fireworks-ai/firefunction-v2")
function_spec = [
{
"name": "get_stock_price",
"description": "Get the current stock price",
"parameters": {
"type": "object",
"properties": {
"symbol": {
"type": "string",
"description": "The stock symbol, e.g. AAPL, GOOG"
}
},
"required": [
"symbol"
]
}
},
{
"name": "check_word_anagram",
"description": "Check if two words are anagrams of each other",
"parameters": {
"type": "object",
"properties": {
"word1": {
"type": "string",
"description": "The first word"
},
"word2": {
"type": "string",
"description": "The second word"
}
},
"required": [
"word1",
"word2"
]
}
}
]
functions = json.dumps(function_spec, indent=4)
messages = [
{'role': 'system', 'content': 'You are a helpful assistant with access to functions. Use them if required.'},
{'role': 'user', 'content': 'Hi, can you tell me the current stock price of google and netflix?'}
]
now = datetime.now().strftime('%Y-%m-%d %H:%M:%S')
model_inputs = tokenizer.apply_chat_template(messages, functions=functions, datetime=now, return_tensors="pt").to(model.device)
generated_ids = model.generate(model_inputs, max_new_tokens=128)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Resources
* [Fireworks discord with function calling channel](https://discord.gg/mMqQxvFD9A)
* [Documentation](https://readme.fireworks.ai/docs/function-calling)
* [Demo app](https://functional-chat.vercel.app/)
* [Try in Fireworks prompt playground UI](https://fireworks.ai/models/fireworks/firefunction-v2)

modelId: israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF | author: israellaguan | last_modified: 2024-06-24T01:47:10Z | downloads: 6 | likes: 0 | library_name: null | tags: ["gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "base_model:TinyLlama/TinyLlama_v1.1_math_code", "base_model:quantized:TinyLlama/TinyLlama_v1.1_math_code", "license:apache-2.0", "endpoints_compatible", "region:us"] | pipeline_tag: null | createdAt: 2024-06-24T01:47:05Z | card:
---
base_model: TinyLlama/TinyLlama_v1.1_math_code
datasets:
- cerebras/SlimPajama-627B
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF
This model was converted to GGUF format from [`TinyLlama/TinyLlama_v1.1_math_code`](https://huggingface.co/TinyLlama/TinyLlama_v1.1_math_code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama_v1.1_math_code) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF --hf-file tinyllama_v1.1_math_code-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF --hf-file tinyllama_v1.1_math_code-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF --hf-file tinyllama_v1.1_math_code-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q6_K-GGUF --hf-file tinyllama_v1.1_math_code-q6_k.gguf -c 2048
```

modelId: saraprice/OpenHermes-13B-COT-headlines-2017-2019-balanced | author: saraprice | last_modified: 2024-06-24T01:43:52Z | downloads: 4 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "en", "dataset:sprice12345/OpenHermes-headlines-2017-19-balanced", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-04-12T17:59:13Z | card:
---
license: apache-2.0
datasets:
- sprice12345/OpenHermes-headlines-2017-19-balanced
language:
- en
---

modelId: nuprl/MultiPL-T-DeepSeekCoder_33b | author: nuprl | last_modified: 2024-06-24T01:42:04Z | downloads: 4 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "dataset:nuprl/MultiPL-T", "arxiv:2308.09895", "license:openrail", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-04-01T16:51:28Z | card:
---
library_name: transformers
license: openrail
datasets:
- nuprl/MultiPL-T
---
# MultiPL-T DeepSeekCoder-33b-Base
This repository holds a [DeepSeekCoder-33b-base](https://huggingface.co/deepseek-ai/deepseek-coder-33b-base) fine-tune
on MultiPL-T Racket.
Examine the commit message to determine the language and checkpoint. We have a checkpoint
for each epoch.
For more information about the training process, see the MultiPL-T paper:
```bibtex
@misc{cassano:multipl-t,
title={Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs},
author={Federico Cassano and John Gouwar and Francesca Lucchetti and Claire Schlesinger and Anders Freeman and Carolyn Jane Anderson and Molly Q Feldman and Michael Greenberg and Abhinav Jangda and Arjun Guha},
year={2024},
eprint={2308.09895},
archivePrefix={arXiv},
primaryClass={cs.PL}
}
```
For usage instructions, see the model card for the original model. Replace the model name with the name of this repository, and set `revision=COMMIT_HASH`.
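To find the per-epoch checkpoints, a sketch (assuming huggingface_hub's commit-listing API) that prints each revision hash alongside its commit message:
```python
# Sketch only: list the repo's commits so the commit messages (which identify
# language/epoch per the card) can be matched to revision hashes.
from huggingface_hub import list_repo_commits

for commit in list_repo_commits("nuprl/MultiPL-T-DeepSeekCoder_33b"):
    print(commit.commit_id[:8], commit.title)
```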

modelId: MrezaPRZ/codestral_experts_ties | author: MrezaPRZ | last_modified: 2024-06-24T01:41:58Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:MrezaPRZ/codestral_high_quality_sft", "base_model:merge:MrezaPRZ/codestral_high_quality_sft", "base_model:MrezaPRZ/codestral_high_quality_sft_bigquery", "base_model:merge:MrezaPRZ/codestral_high_quality_sft_bigquery", "base_model:MrezaPRZ/codestral_high_quality_sft_postgres", "base_model:merge:MrezaPRZ/codestral_high_quality_sft_postgres", "base_model:mistralai/Codestral-22B-v0.1", "base_model:merge:mistralai/Codestral-22B-v0.1", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T01:30:34Z | card:
---
base_model:
- MrezaPRZ/codestral_high_quality_sft_bigquery
- MrezaPRZ/codestral_high_quality_sft
- MrezaPRZ/codestral_high_quality_sft_postgres
- mistralai/Codestral-22B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# ties
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1) as a base.
### Models Merged
The following models were included in the merge:
* [MrezaPRZ/codestral_high_quality_sft_bigquery](https://huggingface.co/MrezaPRZ/codestral_high_quality_sft_bigquery)
* [MrezaPRZ/codestral_high_quality_sft](https://huggingface.co/MrezaPRZ/codestral_high_quality_sft)
* [MrezaPRZ/codestral_high_quality_sft_postgres](https://huggingface.co/MrezaPRZ/codestral_high_quality_sft_postgres)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MrezaPRZ/codestral_high_quality_sft
parameters:
density: 0.5
weight: 0.33
- model: MrezaPRZ/codestral_high_quality_sft_bigquery
parameters:
density: 0.5
weight: 0.33
- model: MrezaPRZ/codestral_high_quality_sft_postgres
parameters:
density: 0.5
weight: 0.33
merge_method: ties
base_model: mistralai/Codestral-22B-v0.1
parameters:
normalize: true
dtype: bfloat16
```
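A typical way to reproduce a merge from this file (assumptions: mergekit installed via `pip install mergekit`, and the config saved under a hypothetical name `ties.yaml`):
```bash
# Sketch only: run mergekit on the YAML above and write the merged model out.
mergekit-yaml ties.yaml ./codestral_experts_ties --cuda
```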

modelId: PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer16-def_layer17_18_19-wikitext-fullrank-77 | author: PhillipGuo | last_modified: 2024-06-24T01:41:29Z | downloads: 8 | likes: 0 | library_name: transformers | tags: ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | pipeline_tag: text-generation | createdAt: 2024-06-24T01:23:20Z | card:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
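Until the authors add a snippet, here is a minimal sketch (an assumption based on the repo's `llama`/`text-generation` tags) using standard transformers loading; the prompt is illustrative:
```python
# Sketch only: standard transformers causal-LM loading for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer16-def_layer17_18_19-wikitext-fullrank-77"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("The history of science shows that", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0], skip_special_tokens=True))
```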
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ANGKJ1995/distilbert-base-uncased-mbib-4096v2
|
ANGKJ1995
| 2024-06-24T01:40:18Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-24T01:39:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon1.5-pgd_layer16-def_layer17_18_19-wikitext-fullrank-77
|
PhillipGuo
| 2024-06-24T01:37:57Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:23:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-78
|
PhillipGuo
| 2024-06-24T01:34:42Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:28:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon3.0-pgd_layer12-def_layer13_14_15-wikitext-fullrank-78
|
PhillipGuo
| 2024-06-24T01:34:38Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:28:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon6.0-pgd_layer16-def_layer17_18_19-wikitext-fullrank-77
|
PhillipGuo
| 2024-06-24T01:33:55Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:22:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon1.5-pgd_layer12-def_layer13_14_15-wikitext-fullrank-78
|
PhillipGuo
| 2024-06-24T01:30:57Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:28:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhillipGuo/hp-lat-llama-PCA-epsilon0.5-pgd_layer12-def_layer13_14_15-wikitext-fullrank-78
|
PhillipGuo
| 2024-06-24T01:30:06Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:27:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ramikan-BR/TiamaPY-v34
|
Ramikan-BR
| 2024-06-24T01:24:22Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/tinyllama-chat-bnb-4bit",
"base_model:quantized:unsloth/tinyllama-chat-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T23:25:39Z |
---
base_model: unsloth/tinyllama-chat-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
```python
# Question
if False:
    from unsloth import FastLanguageModel
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "lora_model", # YOUR MODEL YOU USED FOR TRAINING
        max_seq_length = max_seq_length,
        dtype = dtype,
        load_in_4bit = load_in_4bit,
    )
    FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!
inputs = tokenizer(
[
    alpaca_prompt.format(
        "Eu preciso treinar uma IA gpt2 do hugginface em meu computador", # instruction
        "", # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 4096)

# Answer
<s> Below is an instruction that describes a task related to Python programming or computer science concepts. Provide a detailed, accurate, and helpful response that addresses the query or solves the problem presented. Follow these guidelines:
1. If the task involves coding, provide a complete, working Python solution with explanations.
2. If it's a conceptual question, explain thoroughly, using examples where appropriate.
3. Break down complex problems or concepts into steps or components.
4. If you're unsure about any part of your response, clearly state your level of confidence and any assumptions you're making.
5. If the query is ambiguous or lacks necessary information, ask for clarification before proceeding.
6. After providing your response, briefly summarize the key points or solution.
Remember, accuracy and helpfulness are crucial. Avoid making unfounded claims or providing incorrect information.

### Input:
Eu preciso treinar uma IA gpt2 do hugginface em meu computador

### Output:
# Importações necessárias
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TrainingArguments, Trainer
from transformers.data import DataCollatorForLanguageModeling
from transformers.data.processors import DataProcessor

# Configurações
training_args = TrainingArguments(
    output_dir='./resultados',
    evaluation_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    eval_steps=1000,
    num_warmupSteps=500,
    weighted_metric='accuracy',
)

# Carregamento do tokenizador
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')

# Carregamento do modelo
modelo = GPT2LMHeadModel.from_pretrained('gpt2')

# Configurações do tracker
tracker = DataProcessor(
    input_class=DataProcessor,
    batch_size=1,
    max_length=100,
    padding=True,
    truncation=True,
)

# Treinando o modelo
trainer = Trainer(
    args=training_args,
    train_loop=tracker,
    trainable_pipelines=[],
)

# Treinando o modelo
trainer.train()

# Evaluação do modelo
evaluador = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    model=modelo,
    padding=True,
    max_length=100,
    batch_size=8,
)

# Evalua o modelo
resultados = trainer.evaluate()

# Imprimir os resultados
for name, loss, acc in resultados:
    print(f'{name}: {loss}, {acc:.2f}%')</s>
```
# Uploaded model
- **Developed by:** Ramikan-BR
- **License:** apache-2.0
- **Finetuned from model:** unsloth/tinyllama-chat-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
damgomz/ft_32_10e6_base_x1
|
damgomz
| 2024-06-24T01:20:37Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:42:14Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
    We imagine, design and commercialize innovative off-grid systems that aim to generate
    power at sea, stabilize and collect data. The success of our low power platforms
    WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
    platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 54492.29978084564 |
| Emissions (CO2eq in kg) | 0.0329741509828358 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.6433108421163447 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0567624256027241 |
| Consumed energy (kWh) | 0.7000732677190684 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.10489767707812786 |
| Emissions (CO2eq in kg) | 0.021342817414164543 |
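The figures above follow CodeCarbon's default output format. As a point of reference, a minimal sketch of how such measurements are typically collected with the `codecarbon` package (an assumption; the card does not show the tracking code):
```python
# Sketch only: the card reports CodeCarbon-style metrics but does not
# include the tracking code. EmissionsTracker is codecarbon's standard API.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # writes emissions.csv to the working directory by default
tracker.start()
# ... the fine-tuning loop would run here ...
emissions_kg = tracker.stop()  # estimated CO2eq in kg
print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```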
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_10e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.734880 | 0.591007 |
| 1 | 0.339730 | 0.227890 | 0.925695 |
| 2 | 0.193068 | 0.229836 | 0.933185 |
| 3 | 0.142397 | 0.214513 | 0.917664 |
| 4 | 0.092663 | 0.246131 | 0.917786 |
| 5 | 0.060364 | 0.298660 | 0.887692 |
| 6 | 0.040224 | 0.363768 | 0.884661 |
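## Usage
The card lists configuration and scores but no inference example. A minimal usage sketch, assuming the checkpoint works with the standard `text-classification` pipeline (as the widget metadata above suggests):
```python
# Minimal sketch (assumption: the checkpoint loads with the standard
# text-classification pipeline, as the widget config suggests).
from transformers import pipeline

classifier = pipeline("text-classification", model="damgomz/ft_32_10e6_base_x1")
print(classifier(
    "GEPS Techno is the pioneer of hybridization of renewable energies at sea."
))
```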
|
nuprl/MultiPL-T-CodeLlama_70b
|
nuprl
| 2024-06-24T01:13:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:nuprl/MultiPL-T",
"arxiv:2308.09895",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-11T15:05:44Z |
---
library_name: transformers
license: openrail
datasets:
- nuprl/MultiPL-T
---
# MultiPL-T CodeLlama-70b
This repository holds several [CodeLlama-70b](https://huggingface.co/meta-llama/CodeLlama-70b-hf) fine-tunes, all fine-tuned on MultiPL-T data.
Examine the commit message to determine the language. Due to disk space constraints, we were not able to save all checkpoints. Instead, we have
uploaded the best checkpoint for each language.
For more information about the training process, see the MultiPL-T paper:
```
@misc{cassano:multipl-t,
  title={Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs},
  author={Federico Cassano and John Gouwar and Francesca Lucchetti and Claire Schlesinger and Anders Freeman and Carolyn Jane Anderson and Molly Q Feldman and Michael Greenberg and Abhinav Jangda and Arjun Guha},
  year={2024},
  eprint={2308.09895},
  archivePrefix={arXiv},
  primaryClass={cs.PL}
}
```
For usage instructions, see the model card for the original model. Replace the model name with the name of this repository, and set `revision=COMMIT_HASH`.
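For example (a sketch, with `COMMIT_HASH` left as a placeholder for the commit whose message names your target language):
```python
# Sketch of loading one language's checkpoint by commit. COMMIT_HASH is a
# placeholder; pick the commit whose message names your target language.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "nuprl/MultiPL-T-CodeLlama_70b"
tokenizer = AutoTokenizer.from_pretrained(repo, revision="COMMIT_HASH")
model = AutoModelForCausalLM.from_pretrained(
    repo,
    revision="COMMIT_HASH",
    torch_dtype="auto",
    device_map="auto",  # requires `accelerate`; this is a 70B-parameter model
)
```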
|
gate369/Bitnet-Mistral-3-and4-slerp
|
gate369
| 2024-06-24T01:07:43Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"liminerity/Bitnet-Mistral.0.2-v4",
"liminerity/Bitnet-Mistral.0.2-v3",
"base_model:liminerity/Bitnet-Mistral.0.2-v3",
"base_model:merge:liminerity/Bitnet-Mistral.0.2-v3",
"base_model:liminerity/Bitnet-Mistral.0.2-v4",
"base_model:merge:liminerity/Bitnet-Mistral.0.2-v4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:07:39Z |
---
base_model:
- liminerity/Bitnet-Mistral.0.2-v4
- liminerity/Bitnet-Mistral.0.2-v3
tags:
- merge
- mergekit
- lazymergekit
- liminerity/Bitnet-Mistral.0.2-v4
- liminerity/Bitnet-Mistral.0.2-v3
---
# Bitnet-Mistral-3-and4-slerp
Bitnet-Mistral-3-and4-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [liminerity/Bitnet-Mistral.0.2-v4](https://huggingface.co/liminerity/Bitnet-Mistral.0.2-v4)
* [liminerity/Bitnet-Mistral.0.2-v3](https://huggingface.co/liminerity/Bitnet-Mistral.0.2-v3)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: liminerity/Bitnet-Mistral.0.2-v4
        layer_range: [0, 24]
      - model: liminerity/Bitnet-Mistral.0.2-v3
        layer_range: [0, 24]
merge_method: slerp
base_model: liminerity/Bitnet-Mistral.0.2-v4
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "gate369/Bitnet-Mistral-3-and4-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
mostafasmart/vit-base-patch16-224-7class224
|
mostafasmart
| 2024-06-24T01:04:37Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-24T01:03:54Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_keras_callback
model-index:
- name: vit-base-patch16-224-7class224
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-7class224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0078
- Train Accuracy: 0.9540
- Train Top-3-accuracy: 0.9960
- Validation Loss: 0.1065
- Validation Accuracy: 0.9569
- Validation Top-3-accuracy: 0.9963
- Epoch: 6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 630, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
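Reconstructed in code, that optimizer corresponds roughly to the following sketch (the values are taken from the config dump above; anything not listed there is an assumption):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Polynomial decay from 3e-05 to 0.0 over 630 steps, as in the config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=630,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```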
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.6954 | 0.6170 | 0.9295 | 0.3090 | 0.7653 | 0.9734 | 0 |
| 0.1603 | 0.8272 | 0.9819 | 0.1722 | 0.8640 | 0.9865 | 1 |
| 0.0448 | 0.8890 | 0.9892 | 0.1220 | 0.9071 | 0.9912 | 2 |
| 0.0201 | 0.9192 | 0.9924 | 0.1171 | 0.9289 | 0.9934 | 3 |
| 0.0132 | 0.9359 | 0.9942 | 0.1132 | 0.9416 | 0.9948 | 4 |
| 0.0089 | 0.9466 | 0.9952 | 0.1095 | 0.9506 | 0.9957 | 5 |
| 0.0078 | 0.9540 | 0.9960 | 0.1065 | 0.9569 | 0.9963 | 6 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
1231czx/2b_rs_merged_math_and_gsm8k_2epoch
|
1231czx
| 2024-06-24T01:04:00Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T01:01:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tsudiacr/bert-base-uncased
|
tsudiacr
| 2024-06-24T01:03:54Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-06-19T23:34:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nuprl/MultiPL-T-CodeLlama_34b
|
nuprl
| 2024-06-24T01:02:26Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:nuprl/MultiPL-T",
"arxiv:2308.09895",
"license:openrail",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-08-31T21:01:11Z |
---
library_name: transformers
license: openrail
datasets:
- nuprl/MultiPL-T
---
# MultiPL-T CodeLlama-34b
This repository holds several [CodeLlama-34b](https://huggingface.co/meta-llama/CodeLlama-34b-hf) fine-tunes, all trained on MultiPL-T data.
Examine the commit message to determine the language and checkpoint; we provide a checkpoint for each epoch.
For more information on the training process, see the MultiPL-T paper:
```
@misc{cassano:multipl-t,
title={Knowledge Transfer from High-Resource to Low-Resource Programming Languages for Code LLMs},
author={Federico Cassano and John Gouwar and Francesca Lucchetti and Claire Schlesinger and Anders Freeman and Carolyn Jane Anderson and Molly Q Feldman and Michael Greenberg and Abhinav Jangda and Arjun Guha},
year={2024},
eprint={2308.09895},
archivePrefix={arXiv},
primaryClass={cs.PL}
}
```
For usage instructions, see the model card for the original model. Replace the model name with the name of this repository, and set `revision=COMMIT_HASH`.
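As an illustration (not from the original card), here is a minimal 🤗 transformers sketch of loading one specific checkpoint; the commit hash is a placeholder you would take from this repository's commit history:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: pick the commit hash for the language/epoch you want from the repo history.
COMMIT_HASH = "replace-with-a-real-commit-hash"

tokenizer = AutoTokenizer.from_pretrained("nuprl/MultiPL-T-CodeLlama_34b", revision=COMMIT_HASH)
model = AutoModelForCausalLM.from_pretrained("nuprl/MultiPL-T-CodeLlama_34b", revision=COMMIT_HASH)
```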
|
israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF
|
israellaguan
| 2024-06-24T00:57:12Z | 10 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:cerebras/SlimPajama-627B",
"base_model:TinyLlama/TinyLlama_v1.1_math_code",
"base_model:quantized:TinyLlama/TinyLlama_v1.1_math_code",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-24T00:57:02Z |
---
base_model: TinyLlama/TinyLlama_v1.1_math_code
datasets:
- cerebras/SlimPajama-627B
language:
- en
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF
This model was converted to GGUF format from [`TinyLlama/TinyLlama_v1.1_math_code`](https://huggingface.co/TinyLlama/TinyLlama_v1.1_math_code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama_v1.1_math_code) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF --hf-file tinyllama_v1.1_math_code-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF --hf-file tinyllama_v1.1_math_code-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF --hf-file tinyllama_v1.1_math_code-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo israellaguan/TinyLlama_v1.1_math_code-Q8_0-GGUF --hf-file tinyllama_v1.1_math_code-q8_0.gguf -c 2048
```
|
DBangshu/Base_gemma_7_0
|
DBangshu
| 2024-06-24T00:47:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T00:45:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wgcv/platzi-distilroberta-base-mrpc-wgcv
|
wgcv
| 2024-06-24T00:36:44Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T17:41:55Z |
---
license: apache-2.0
base_model: distilroberta-base
tags:
- text-classification
- generated_from_trainer
metrics:
- accuracy
- f1
widget:
- text: "Around 0335 GMT , Tab shares were up 19 cents , or 4.4 % , at A $ 4.56 , having earlier set a record high of A $ 4.57 ., Tab shares jumped 20 cents , or 4.6 % , to set a record closing high at A $ 4.57 ."
example_title: "not_equivalent"
- text: "The stock rose $ 2.11 , or about 11 percent , to close Friday at $ 21.51 on the New York Stock Exchange ., PG & E Corp. shares jumped $ 1.63 or 8 percent to $ 21.03 on the New York Stock Exchange on Friday ."
example_title: "equivalent"
model-index:
- name: platzi-distilroberta-base-mrpc-wgcv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-wgcv
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4002
- Accuracy: 0.8456
- F1: 0.8835
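As a quick usage sketch (not part of the original card), the checkpoint can be queried through the 🤗 pipeline API; the single-string sentence-pair format mirrors the widget examples above, and the sentence pair below is an arbitrary illustration:
```python
from transformers import pipeline

# Paraphrase scoring sketch; LABEL_0/LABEL_1 are assumed to map to
# not_equivalent/equivalent, per the widget example titles.
classifier = pipeline("text-classification", model="wgcv/platzi-distilroberta-base-mrpc-wgcv")
print(classifier("The company reported record profits., Profits at the company hit a record high."))
```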
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.409 | 2.1739 | 500 | 0.4002 | 0.8456 | 0.8835 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
sulaimank/wav2vec2-large-xls-r-300m-Sulaimank
|
sulaimank
| 2024-06-24T00:34:00Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-23T21:40:57Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-Sulaimank
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-Sulaimank
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1017
- Wer: 0.1013
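A minimal inference sketch (not in the original card); the audio path is a placeholder for a 16 kHz speech file:
```python
from transformers import pipeline

# CTC transcription sketch; "sample.wav" is a placeholder audio file.
asr = pipeline("automatic-speech-recognition", model="sulaimank/wav2vec2-large-xls-r-300m-Sulaimank")
print(asr("sample.wav")["text"])
```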
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 3.0012 | 10.8108 | 400 | 0.3243 | 0.4548 |
| 0.1975 | 21.6216 | 800 | 0.1885 | 0.2591 |
| 0.0751 | 32.4324 | 1200 | 0.1590 | 0.1902 |
| 0.0398 | 43.2432 | 1600 | 0.1300 | 0.1520 |
| 0.0221 | 54.0541 | 2000 | 0.1195 | 0.1361 |
| 0.0137 | 64.8649 | 2400 | 0.1034 | 0.1112 |
| 0.0088 | 75.6757 | 2800 | 0.1017 | 0.1013 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.0+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
|
mostafasmart/vit-base-patch16-224-5class224
|
mostafasmart
| 2024-06-24T00:17:41Z | 6 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-23T23:56:33Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_keras_callback
model-index:
- name: vit-base-patch16-224-5class224
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-5class224
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0115
- Train Accuracy: 0.9460
- Train Top-3-accuracy: 0.9911
- Validation Loss: 0.1621
- Validation Accuracy: 0.9490
- Validation Top-3-accuracy: 0.9916
- Epoch: 6
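A minimal inference sketch (not in the original card); since the repository ships TensorFlow weights, the TF backend is requested explicitly, and "image.jpg" is a placeholder path:
```python
from transformers import pipeline

# Five-way image classification sketch; labels come from the repo's config.
classifier = pipeline(
    "image-classification",
    model="mostafasmart/vit-base-patch16-224-5class224",
    framework="tf",
)
print(classifier("image.jpg"))
```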
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 574, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.7725 | 0.6414 | 0.8898 | 0.3755 | 0.7636 | 0.9478 | 0 |
| 0.2160 | 0.8219 | 0.9635 | 0.2372 | 0.8557 | 0.9726 | 1 |
| 0.0696 | 0.8812 | 0.9780 | 0.2035 | 0.8989 | 0.9818 | 2 |
| 0.0344 | 0.9108 | 0.9842 | 0.1715 | 0.9203 | 0.9860 | 3 |
| 0.0194 | 0.9278 | 0.9875 | 0.1911 | 0.9337 | 0.9888 | 4 |
| 0.0147 | 0.9381 | 0.9897 | 0.1651 | 0.9425 | 0.9904 | 5 |
| 0.0115 | 0.9460 | 0.9911 | 0.1621 | 0.9490 | 0.9916 | 6 |
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_32_14e6_base_x12
|
damgomz
| 2024-06-24T00:13:06Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T11:36:12Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [NO CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_14e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.759346 | 0.417754 |
| 1 | 0.372233 | 0.266370 | 0.896978 |
| 2 | 0.232219 | 0.233363 | 0.922029 |
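A minimal inference sketch (not part of the original card), reusing the widget text above:
```python
from transformers import pipeline

# ALBERT text-classification sketch for this fine-tune.
clf = pipeline("text-classification", model="damgomz/ft_32_14e6_base_x12")
print(clf("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```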
|
tsavage68/Summary_L3_1000steps_1e6rate_05beta_CSFTDPO
|
tsavage68
| 2024-06-24T00:12:30Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-24T00:06:21Z |
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_1000steps_1e6rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_1000steps_1e6rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5961
- Rewards/chosen: 0.1158
- Rewards/rejected: -2.7330
- Rewards/accuracies: 0.1400
- Rewards/margins: 2.8488
- Logps/rejected: -20.7298
- Logps/chosen: -9.1512
- Logits/rejected: -1.1135
- Logits/chosen: -1.1149
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
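These settings correspond to a trl DPO run; below is a hedged sketch of how they might be wired up with trl's `DPOTrainer` as of this release. The toy preference dataset, the output directory, and `beta=0.5` (inferred from "05beta" in the model name) are assumptions, not taken from the card:
```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Toy preference data for illustration only; the actual training data is unknown.
train_dataset = Dataset.from_dict({
    "prompt": ["Summarize: The meeting covered Q3 results and hiring plans."],
    "chosen": ["The meeting covered Q3 results and hiring plans."],
    "rejected": ["I like turtles."],
})

args = TrainingArguments(
    output_dir="dpo-out",           # assumption
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # total train batch size 4
    learning_rate=1e-6,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones the policy as the reference model when None
    beta=0.5,        # inferred from "05beta" in the model name
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```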
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.555 | 0.2004 | 50 | 0.5962 | 0.0976 | -1.3577 | 0.1400 | 1.4553 | -17.9791 | -9.1876 | -1.0985 | -1.1002 |
| 0.6585 | 0.4008 | 100 | 0.5962 | 0.1094 | -1.5231 | 0.1400 | 1.6326 | -18.3100 | -9.1639 | -1.1003 | -1.1019 |
| 0.6238 | 0.6012 | 150 | 0.5961 | 0.1341 | -2.2789 | 0.1400 | 2.4130 | -19.8216 | -9.1145 | -1.1048 | -1.1065 |
| 0.6065 | 0.8016 | 200 | 0.5961 | 0.1193 | -2.7271 | 0.1400 | 2.8464 | -20.7179 | -9.1442 | -1.1137 | -1.1150 |
| 0.6238 | 1.0020 | 250 | 0.5961 | 0.1211 | -2.7359 | 0.1400 | 2.8570 | -20.7355 | -9.1407 | -1.1133 | -1.1146 |
| 0.6238 | 1.2024 | 300 | 0.5961 | 0.1211 | -2.7359 | 0.1400 | 2.8570 | -20.7355 | -9.1407 | -1.1133 | -1.1146 |
| 0.6238 | 1.4028 | 350 | 0.5961 | 0.1226 | -2.7319 | 0.1400 | 2.8545 | -20.7275 | -9.1376 | -1.1131 | -1.1144 |
| 0.5718 | 1.6032 | 400 | 0.5961 | 0.1226 | -2.7319 | 0.1400 | 2.8545 | -20.7275 | -9.1376 | -1.1131 | -1.1144 |
| 0.5892 | 1.8036 | 450 | 0.5961 | 0.1196 | -2.7246 | 0.1400 | 2.8442 | -20.7129 | -9.1435 | -1.1135 | -1.1147 |
| 0.5718 | 2.0040 | 500 | 0.5961 | 0.1211 | -2.7256 | 0.1400 | 2.8467 | -20.7150 | -9.1406 | -1.1135 | -1.1147 |
| 0.5718 | 2.2044 | 550 | 0.5961 | 0.1207 | -2.7233 | 0.1400 | 2.8439 | -20.7103 | -9.1414 | -1.1134 | -1.1147 |
| 0.5545 | 2.4048 | 600 | 0.5961 | 0.1207 | -2.7233 | 0.1400 | 2.8439 | -20.7103 | -9.1414 | -1.1134 | -1.1147 |
| 0.5199 | 2.6052 | 650 | 0.5961 | 0.1207 | -2.7233 | 0.1400 | 2.8439 | -20.7103 | -9.1414 | -1.1134 | -1.1147 |
| 0.6238 | 2.8056 | 700 | 0.5961 | 0.1207 | -2.7233 | 0.1400 | 2.8439 | -20.7103 | -9.1414 | -1.1134 | -1.1147 |
| 0.6065 | 3.0060 | 750 | 0.5961 | 0.1181 | -2.7332 | 0.1400 | 2.8513 | -20.7302 | -9.1466 | -1.1134 | -1.1147 |
| 0.6412 | 3.2064 | 800 | 0.5961 | 0.1124 | -2.7370 | 0.1400 | 2.8494 | -20.7378 | -9.1580 | -1.1135 | -1.1148 |
| 0.6585 | 3.4068 | 850 | 0.5961 | 0.1124 | -2.7370 | 0.1400 | 2.8494 | -20.7378 | -9.1580 | -1.1135 | -1.1148 |
| 0.6238 | 3.6072 | 900 | 0.5961 | 0.1148 | -2.7352 | 0.1400 | 2.8500 | -20.7342 | -9.1532 | -1.1135 | -1.1149 |
| 0.5372 | 3.8076 | 950 | 0.5961 | 0.1148 | -2.7352 | 0.1400 | 2.8500 | -20.7342 | -9.1532 | -1.1135 | -1.1149 |
| 0.6238 | 4.0080 | 1000 | 0.5961 | 0.1158 | -2.7330 | 0.1400 | 2.8488 | -20.7298 | -9.1512 | -1.1135 | -1.1149 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|
damgomz/ft_32_14e6_base_x4
|
damgomz
| 2024-06-24T00:07:51Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:41:47Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 50126.91908168793 |
| Emissions (Co2eq in kg) | 0.0303325962416147 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.591775274712012 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0522152189423639 |
| Consumed energy (kWh) | 0.6439904936543758 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.09649431923224926 |
| Emissions (Co2eq in kg) | 0.019633043306994436 |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_14e6_base_x4 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.4e-05 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.706296 | 0.060120 |
| 1 | 0.324685 | 0.232908 | 0.917821 |
| 2 | 0.196300 | 0.220065 | 0.922584 |
| 3 | 0.146310 | 0.245110 | 0.891744 |
| 4 | 0.096830 | 0.267668 | 0.923489 |
| 5 | 0.059735 | 0.353932 | 0.895680 |
| 6 | 0.038564 | 0.376377 | 0.906003 |
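As with the sibling models, a minimal inference sketch (not part of the original card), reusing the widget text:
```python
from transformers import pipeline

# ALBERT text-classification sketch for this fine-tune.
clf = pipeline("text-classification", model="damgomz/ft_32_14e6_base_x4")
print(clf("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```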
|
paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF
|
paulo037
| 2024-06-24T00:04:25Z | 14 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch",
"base_model:quantized:NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-24T00:04:15Z |
---
base_model: NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF
This model was converted to GGUF format from [`NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch`](https://huggingface.co/NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NESPED-GEN/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF --hf-file phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF --hf-file phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF --hf-file phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo paulo037/phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-Q4_0-GGUF --hf-file phi-3-mini-128k-instruct-mix-spider-bird-1-epoch-q4_0.gguf -c 2048
```
|
M2LabOrg/whisper-small-af
|
M2LabOrg
| 2024-06-24T00:02:09Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"af",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-06-23T06:59:00Z |
---
language:
- af
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper small af - Michel Mesquita
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: af
split: None
args: 'config: af, split: test'
metrics:
- name: Wer
type: wer
value: 1.740506329113924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small af - Michel Mesquita
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Wer: 1.7405
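A minimal inference sketch (not in the original card); the audio path is a placeholder for an Afrikaans speech sample:
```python
from transformers import pipeline

# Whisper transcription sketch; "afrikaans_sample.wav" is a placeholder file.
asr = pipeline("automatic-speech-recognition", model="M2LabOrg/whisper-small-af")
print(asr("afrikaans_sample.wav")["text"])
```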
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:--------:|:----:|:---------------:|:------:|
| 0.0001 | 190.4762 | 1000 | 0.0002 | 1.8987 |
| 0.0001 | 380.9524 | 2000 | 0.0001 | 1.8987 |
| 0.0 | 571.4286 | 3000 | 0.0001 | 1.7405 |
| 0.0 | 761.9048 | 4000 | 0.0000 | 1.7405 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Kit-Lemonfoot/kitlemonfoot_rvc_models
|
Kit-Lemonfoot
| 2024-06-23T23:57:46Z | 0 | 4 | null |
[
"music",
"rvc",
"dataset:Kit-Lemonfoot/LemonfootVoiceDatasets",
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-28T18:39:19Z |
---
license: creativeml-openrail-m
tags:
- music
- rvc
datasets:
- Kit-Lemonfoot/LemonfootVoiceDatasets
---
# Kit Lemonfoot's RVC Models
This repository exists to host RVC models made by Kit Lemonfoot, also known as Noel Shirogane's High Flying Birds on YouTube.
Please credit me if you use any models in this repository in any way.
## Currently Available Models:
(For reference: SPEAKING means only speaking was used in the dataset, SINGING means only singing was used, and HYBRID means a mix of both was used.)
- AZKi [Hololive JP] (Hybrid)
- AZKi BLaCK [Hololive JP] (Singing)
- Hitomi Chris [Hololive JP] (Speaking)
- Shirogane Noel [Hololive JP] (Speaking)
- Shiranui Flare [Hololive JP] (Speaking)
- Ichijou Ririka [Hololive DEV_IS] (Speaking)
- Airani Iofifteen [Hololive ID] (Speaking)
- Pavolia Reine [Hololive ID] (Speaking)
- Tsukumo Sana [Hololive EN] (Singing)
- Cecilia Immergreen [Hololive EN] (Singing)
- Hanasaki Miyabi [Holostars JP] (Speaking)
- Kanade Izuru [Holostars JP] (Singing)
- Astel Leda [Holostars JP] (Speaking)
- Kishido Temma [Holostars JP] (Speaking)
- Kageyama Shien [Holostars JP] (Speaking)
- Aragami Oga [Holostars JP] (Speaking)
- Minase Rio [Holostars JP] (Speaking)
- Regis Altare [Holostars EN] (Speaking)
- Axel Syrios [Holostars EN] (Speaking)
- Jurard T. Rexford [Holostars EN] (Speaking)
- Octavio [Holostars EN] (Speaking)
- Crimzon Ruze [Holostars EN] (Speaking)
- Finana Ryugu [Nijisanji EN] (Hybrid)
- Mika Melatika [Nijisanji ID] (Speaking)
- Rinkou Ashelia [Phase Connect] (Speaking)
- FRE3ZER [Hypnospace Outlaw] (Singing)
- Concept Unification Installation Tape [CEC Internal] (Speaking)
- Dirtbag Private [YouTuber] (Speaking)
- Brad Taste in Music [YouTuber] (Speaking)
- Captain Astronaut [YouTuber] (Speaking)
|
lordspline/qwen-pruned-360m
|
lordspline
| 2024-06-23T23:54:11Z | 41 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T23:17:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/t-ponynai3-v55-sdxl-spo
|
John6666
| 2024-06-23T23:29:19Z | 2,137 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"SPO",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-23T23:24:24Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
- SPO
---
Original model is [here](https://civitai.com/models/317902/t-ponynai3?modelVersionId=593760).
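A minimal text-to-image sketch (not part of the original card), using the `StableDiffusionXLPipeline` class named in this repo's tags; the prompt is an arbitrary example:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/t-ponynai3-v55-sdxl-spo", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, solo, outdoors, detailed background").images[0]  # example prompt only
image.save("sample.png")
```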
|
John6666/0002-pony-v3-sdxl
|
John6666
| 2024-06-23T23:21:35Z | 2,864 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pony",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-06-23T23:16:03Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pony
---
Original model is [here](https://civitai.com/models/471439/0002-pony?modelVersionId=592768).
|
DBangshu/gemma_e5_8_0
|
DBangshu
| 2024-06-23T23:01:38Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T22:59:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yuridrcosta/bert-base-uncased-finetuned-ner
|
yuridrcosta
| 2024-06-23T22:58:40Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-06-23T20:24:52Z |
---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7238
- Precision: 0.3864
- Recall: 0.2742
- F1: 0.3208
- Accuracy: 0.9134
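A minimal inference sketch (not in the original card); `aggregation_strategy="simple"` merges word pieces into whole entity spans, and the input sentence is an arbitrary example:
```python
from transformers import pipeline

# Token-classification (NER) sketch for this fine-tune.
ner = pipeline(
    "token-classification",
    model="yuridrcosta/bert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```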
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 16 | 0.6017 | 0.4286 | 0.2258 | 0.2958 | 0.9187 |
| No log | 2.0 | 32 | 0.6134 | 0.3311 | 0.2688 | 0.2967 | 0.9075 |
| No log | 3.0 | 48 | 0.5969 | 0.3525 | 0.2634 | 0.3015 | 0.9096 |
| No log | 4.0 | 64 | 0.6446 | 0.3208 | 0.2742 | 0.2957 | 0.9071 |
| No log | 5.0 | 80 | 0.6219 | 0.4182 | 0.2473 | 0.3108 | 0.9176 |
| No log | 6.0 | 96 | 0.6519 | 0.3401 | 0.2688 | 0.3003 | 0.9103 |
| No log | 7.0 | 112 | 0.6576 | 0.3551 | 0.2634 | 0.3025 | 0.9120 |
| No log | 8.0 | 128 | 0.6534 | 0.3676 | 0.2688 | 0.3106 | 0.9106 |
| No log | 9.0 | 144 | 0.6522 | 0.3732 | 0.2849 | 0.3232 | 0.9124 |
| No log | 10.0 | 160 | 0.6879 | 0.3503 | 0.2957 | 0.3207 | 0.9078 |
| No log | 11.0 | 176 | 0.6825 | 0.3696 | 0.2742 | 0.3148 | 0.9124 |
| No log | 12.0 | 192 | 0.7115 | 0.3732 | 0.2849 | 0.3232 | 0.9120 |
| No log | 13.0 | 208 | 0.7013 | 0.3984 | 0.2742 | 0.3248 | 0.9138 |
| No log | 14.0 | 224 | 0.7016 | 0.3732 | 0.2849 | 0.3232 | 0.9117 |
| No log | 15.0 | 240 | 0.7313 | 0.3643 | 0.2742 | 0.3129 | 0.9110 |
| No log | 16.0 | 256 | 0.7267 | 0.3442 | 0.2849 | 0.3118 | 0.9082 |
| No log | 17.0 | 272 | 0.7159 | 0.3624 | 0.2903 | 0.3224 | 0.9096 |
| No log | 18.0 | 288 | 0.6946 | 0.3542 | 0.2742 | 0.3091 | 0.9099 |
| No log | 19.0 | 304 | 0.7017 | 0.3852 | 0.2796 | 0.3240 | 0.9127 |
| No log | 20.0 | 320 | 0.7229 | 0.3467 | 0.2796 | 0.3095 | 0.9089 |
| No log | 21.0 | 336 | 0.7188 | 0.3817 | 0.2688 | 0.3155 | 0.9124 |
| No log | 22.0 | 352 | 0.7269 | 0.3669 | 0.2742 | 0.3138 | 0.9110 |
| No log | 23.0 | 368 | 0.7248 | 0.3714 | 0.2796 | 0.3190 | 0.9113 |
| No log | 24.0 | 384 | 0.7235 | 0.3835 | 0.2742 | 0.3197 | 0.9131 |
| No log | 25.0 | 400 | 0.7238 | 0.3864 | 0.2742 | 0.3208 | 0.9134 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Rabeya/mistral-checkpoint-30000
|
Rabeya
| 2024-06-23T22:57:33Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T22:52:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
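In the absence of author-provided code, here is a minimal sketch assuming standard 🤗 transformers causal-LM usage (the repo id comes from this card; generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal usage sketch; adjust dtype/device to your hardware.
tokenizer = AutoTokenizer.from_pretrained("Rabeya/mistral-checkpoint-30000")
model = AutoModelForCausalLM.from_pretrained("Rabeya/mistral-checkpoint-30000")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```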
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dfndr11/llama-2-7b-climate-change-finetune-merge
|
dfndr11
| 2024-06-23T22:57:08Z | 24 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"environmental science",
"education",
"question-answering",
"en",
"dataset:dfndr11/climate-change-qna",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-06-23T18:56:46Z |
---
library_name: transformers
tags:
- environmental science
- education
license: apache-2.0
datasets:
- dfndr11/climate-change-qna
language:
- en
pipeline_tag: question-answering
---
# Model Card for Model ID
This model was fine-tuned to answer questions related to the Climate Change chapter of the AP Environmental Science exam.
## Model Details
### Model Description
This is a model card for a fine-tuned Llama 2-7b transformer model. The compute required to fine-tune it was provided by Intel at Berkeley's AI Hackathon 2.0, 2024.
- **Developed by:** Russell Anthony Bustamante, Haoyan Wan
- **Funded by [optional]:** Compute sponsored by Intel
- **Model type:** Large Language Model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** Llama 2-7b
### Model Sources [optional]
https://huggingface.co/datasets/dfndr11/climate-change-qna
## Uses
### Direct Use
This model is intended to be used in our Devpost submission, QuizzicalAI.
https://devpost.com/software/quizzicalai
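A minimal sketch of querying the model with a climate-change question, assuming standard transformers causal-LM usage (the prompt wording is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dfndr11/llama-2-7b-climate-change-finetune-merge"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

question = "What is the greenhouse effect?"  # illustrative APES-style question
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```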
### Out-of-Scope Use
This model will not work well for questions not related specifically to the Climate Change chapter of the APES exam.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Stephanie-S/deberta_v3_large
|
Stephanie-S
| 2024-06-23T22:49:41Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T22:24:43Z |
---
license: mit
base_model: microsoft/deberta-v3-large
tags:
- generated_from_trainer
model-index:
- name: deberta_v3_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_v3_large
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
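For reference, a sketch of how these settings map onto 🤗 transformers `TrainingArguments` (a reconstruction from the list above, not the original training script; unlisted arguments stay at their defaults):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything else uses defaults.
args = TrainingArguments(
    output_dir="deberta_v3_large",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed precision
)
```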
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1686 | 1.0 | 1250 | 0.1342 |
| 0.0898 | 2.0 | 2500 | 0.1393 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
John6666/hassaku-xl-hentai-v12-sdxl
|
John6666
| 2024-06-23T22:22:15Z | 1,265 | 2 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"Cham22 v2",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-27T02:40:54Z |
---
license: other
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- Cham22 v2
---
Original model is [here](https://civitai.com/models/376031?modelVersionId=518550).
|
wgcv/platzi-vit-model-wgcv
|
wgcv
| 2024-06-23T22:21:43Z | 11 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"dataset:AI-Lab-Makerere/beans",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-23T09:54:08Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
widget:
- src: https://huggingface.co/wgcv/platzi-vit-model-wgcv/resolve/main/healthy.jpeg
example_title: Healthy
- src: https://huggingface.co/wgcv/platzi-vit-model-wgcv/resolve/main/bean_rust.jpeg
example_title: Bean Rust
model-index:
- name: platzi-vit-model-wgcv
results: []
datasets:
- AI-Lab-Makerere/beans
pipeline_tag: image-classification
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# platzi-vit-model-wgcv
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [AI-Lab-Makerere/beans](https://huggingface.co/datasets/AI-Lab-Makerere/beans) dataset.
It achieves the following results on the evaluation set:
## Testing
Bean Rust

Healthy

## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 4136, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
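The serialized optimizer config above corresponds roughly to the following Keras code (a sketch reconstructed from the config dict, not the original training script):
```python
import tensorflow as tf

# Reconstructed from the serialized optimizer config above (sketch).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5e-05,
    decay_steps=4136,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```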
### Training results
### Framework versions
- Transformers 4.41.2
- TensorFlow 2.15.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
BoscoTheDog/bitnet_b1_58-xl_q8_0_gguf
|
BoscoTheDog
| 2024-06-23T22:15:18Z | 76 | 6 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"arxiv:2402.17764",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T20:23:57Z |
---
license: mit
---
This is a reproduction of the <a href="https://arxiv.org/abs/2402.17764">BitNet b1.58</a> paper. The models are trained on the <a href="https://github.com/togethercomputer/RedPajama-Data">RedPajama dataset</a> for 100B tokens. The hyperparameters, as well as the two-stage learning-rate and weight-decay schedules, follow the suggestions in the authors' follow-up <a href="https://github.com/microsoft/unilm/blob/master/bitnet/The-Era-of-1-bit-LLMs__Training_Tips_Code_FAQ.pdf">paper</a>. All models are open-source in the <a href="https://huggingface.co/1bitLLM">repo</a>. We will train larger models and/or on more tokens when resources become available.
## Results
PPL and zero-shot accuracy:
| Models | PPL | ARCe | ARCc | HS | BQ | OQ | PQ | WGe | Avg |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| FP16 700M (reported) | 12.33 | 54.7 | 23.0 | 37.0 | 60.0 | 20.2 | 68.9 | 54.8 | 45.5 |
| BitNet b1.58 700M (reported) | 12.87 | 51.8 | 21.4 | 35.1 | 58.2 | 20.0 | 68.1 | 55.2 | 44.3 |
| BitNet b1.58 700M (reproduced) | 12.78 | 51.4 | 21.8 | 35.0 | 59.6 | 20.6 | 67.5 | 55.4 | 44.5 |
| FP16 1.3B (reported) | 11.25 | 56.9 | 23.5 | 38.5 | 59.1 | 21.6 | 70.0 | 53.9 | 46.2 |
| BitNet b1.58 1.3B (reported) | 11.29 | 54.9 | 24.2 | 37.7 | 56.7 | 19.6 | 68.8 | 55.8 | 45.4 |
| BitNet b1.58 1.3B (reproduced) | 11.19 | 55.8 | 23.7 | 37.6 | 59.0 | 20.2 | 69.2 | 56.0 | 45.9 |
| FP16 3B (reported) | 10.04 | 62.1 | 25.6 | 43.3 | 61.8 | 24.6 | 72.1 | 58.2 | 49.7 |
| BitNet b1.58 3B (reported) | 9.91 | 61.4 | 28.3 | 42.9 | 61.5 | 26.6 | 71.5 | 59.3 | 50.2 |
| BitNet b1.58 3B (reproduced) | 9.88 | 60.9 | 28.0 | 42.3 | 58.3 | 26.0 | 71.4 | 60.3 | 49.6 |
The differences between the reported numbers and the reproduced results possibly stem from variance in the training data processing, seeds, or other random factors.
## Evaluation
You can now run this model with llama.cpp.
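A minimal sketch, assuming a local llama.cpp build (the binary name, GGUF filename, and flags below are illustrative and vary by llama.cpp version):
```bash
# Sketch: run the q8_0 GGUF from this repo with llama.cpp (paths are illustrative).
./llama-cli -m bitnet_b1_58-xl_q8_0.gguf -p "Once upon a time" -n 128
```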
|
Otavio12/first_sentences_based_model
|
Otavio12
| 2024-06-23T21:52:20Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-23T21:18:23Z |
---
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: first_sentences_based_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first_sentences_based_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2821
- Rouge1: 0.2693
- Rouge2: 0.1679
- Rougel: 0.2565
- Rougelsum: 0.2559
- Gen Len: 19.0
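A minimal inference sketch, assuming standard text2text usage for this checkpoint (the input text is illustrative):
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Otavio12/first_sentences_based_model")
print(generator("Your input text here", max_new_tokens=20))
```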
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 106 | 3.0079 | 0.1316 | 0.0477 | 0.1134 | 0.1133 | 19.0 |
| No log | 2.0 | 212 | 2.6733 | 0.2296 | 0.1302 | 0.2121 | 0.2121 | 19.0 |
| No log | 3.0 | 318 | 2.5159 | 0.2535 | 0.1472 | 0.2367 | 0.2371 | 19.0 |
| No log | 4.0 | 424 | 2.4352 | 0.2588 | 0.153 | 0.244 | 0.2438 | 19.0 |
| 3.0707 | 5.0 | 530 | 2.3773 | 0.2654 | 0.1624 | 0.2515 | 0.2515 | 19.0 |
| 3.0707 | 6.0 | 636 | 2.3391 | 0.2624 | 0.1607 | 0.2499 | 0.2495 | 19.0 |
| 3.0707 | 7.0 | 742 | 2.3124 | 0.2672 | 0.1662 | 0.2542 | 0.2538 | 19.0 |
| 3.0707 | 8.0 | 848 | 2.2952 | 0.2688 | 0.1677 | 0.2557 | 0.255 | 19.0 |
| 3.0707 | 9.0 | 954 | 2.2854 | 0.2692 | 0.1689 | 0.2567 | 0.2562 | 19.0 |
| 2.5484 | 10.0 | 1060 | 2.2821 | 0.2693 | 0.1679 | 0.2565 | 0.2559 | 19.0 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
DBangshu/Base_gemma_5_0
|
DBangshu
| 2024-06-23T21:50:02Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T21:47:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
adamo1139/Yi-34B-200K-Un-Instruct-1906
|
adamo1139
| 2024-06-23T21:47:47Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:adamo1139/uninstruct-v1-experimental-chatml",
"arxiv:2403.03507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T11:43:41Z |
---
license: apache-2.0
datasets:
- adamo1139/uninstruct-v1-experimental-chatml
---
## Basic Model Info
Trained for 1 epoch on adamo1139/uninstruct-v1-experimental-chatml. I used [GaLore](https://arxiv.org/abs/2403.03507).\
The purpose of this model is to make the model unlearn ChatML-specific control tokens such as `<|im_start|>`, `<|im_end|>`, `user`, and `assistant`.
This is a base model meant for further finetuning. I think much of the OpenAI slop is still left in there, so it's probably best combined with a preference-optimization method like DPO, ORPO, or SPO for best results.
|
adamo1139/Yi-34B-200K-HESOYAM-2206
|
adamo1139
| 2024-06-23T21:44:27Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"galore",
"dataset:adamo1139/uninstruct-v1-experimental-chatml",
"dataset:adamo1139/HESOYAM_v0.3",
"arxiv:2403.03507",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-22T10:54:59Z |
---
license: apache-2.0
datasets:
- adamo1139/uninstruct-v1-experimental-chatml
- adamo1139/HESOYAM_v0.3
tags:
- galore
---
## Basic Model Info
Trained for 1 epoch on adamo1139/uninstruct-v1-experimental-chatml and then for 1 epoch on adamo1139/HESOYAM_v0.3. I used [GaLore](https://arxiv.org/abs/2403.03507) for both stages.
This model is trained only on human data, finetuned to behave like a poster on the 4chan board /x/ or a redditor. The data contains comments from one 4chan board ("paranormal") and about 10 subreddits. There's also some PIPPA data in case you want to roleplay. Have a look at the dataset to know what to expect.
Use the ChatML prompt format with a system prompt like those in adamo1139/HESOYAM_v0.3, e.g. `A chat on 4chan` or `A chat on subreddit /r/wallstreetbets`, as in the sketch below. With the plain system prompt `A chat` it behaves like a slopped OpenAI-style model, so I advise you to avoid using that.
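For reference, a minimal sketch of what such a ChatML prompt looks like (the system strings come from this card; the user message is illustrative):
```python
system = "A chat on subreddit /r/wallstreetbets"
user_message = "What do you all think about meme stocks?"

# ChatML prompt layout used by this model (sketch; message content is illustrative).
prompt = (
    f"<|im_start|>system\n{system}<|im_end|>\n"
    f"<|im_start|>user\n{user_message}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
```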
|
C-O-P-A/WL-web-class
|
C-O-P-A
| 2024-06-23T21:40:46Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2024-06-23T21:37:39Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# model_4000_2/model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("C-O-P-A/WL-web-class")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
AliGhiasvand86/epoch_15_load_last_model_23JUNE_v2
|
AliGhiasvand86
| 2024-06-23T21:39:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-23T21:39:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-8bit
|
mlx-community
| 2024-06-23T21:38:36Z | 8 | 1 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"generated_from_trainer",
"axolotl",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:apache-2.0",
"region:us"
] | null | 2024-06-23T21:28:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- axolotl
- mlx
base_model: 01-ai/Yi-1.5-34B-32k
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-8bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k`](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
AliGhiasvand86/epoch_15_load_best_model_23JUNE_v2
|
AliGhiasvand86
| 2024-06-23T21:38:20Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-23T21:37:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DBangshu/gemma_e5_7_0
|
DBangshu
| 2024-06-23T21:32:29Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T21:30:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-4bit
|
mlx-community
| 2024-06-23T21:32:28Z | 9 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"generated_from_trainer",
"axolotl",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:apache-2.0",
"region:us"
] | null | 2024-06-23T21:26:52Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- axolotl
- mlx
base_model: 01-ai/Yi-1.5-34B-32k
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-4bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k`](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-2bit
|
mlx-community
| 2024-06-23T21:31:10Z | 7 | 0 |
mlx
|
[
"mlx",
"safetensors",
"llama",
"generated_from_trainer",
"axolotl",
"dataset:cognitivecomputations/Dolphin-2.9",
"dataset:teknium/OpenHermes-2.5",
"dataset:m-a-p/CodeFeedback-Filtered-Instruction",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:cognitivecomputations/samantha-data",
"dataset:microsoft/orca-math-word-problems-200k",
"dataset:Locutusque/function-calling-chatml",
"dataset:internlm/Agent-FLAN",
"license:apache-2.0",
"region:us"
] | null | 2024-06-23T21:28:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- axolotl
- mlx
base_model: 01-ai/Yi-1.5-34B-32k
datasets:
- cognitivecomputations/Dolphin-2.9
- teknium/OpenHermes-2.5
- m-a-p/CodeFeedback-Filtered-Instruction
- cognitivecomputations/dolphin-coder
- cognitivecomputations/samantha-data
- microsoft/orca-math-word-problems-200k
- Locutusque/function-calling-chatml
- internlm/Agent-FLAN
---
# mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-2bit
This model was converted to MLX format from [`cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k`](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) using mlx-lm version **0.12.1**.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-Yi-1.5-34B-32k) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/dolphin-2.9.3-Yi-1.5-34B-32k-2bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
|
Sharan1712/llama2_7B_unnaturalcore_qrslora_4bit_2b
|
Sharan1712
| 2024-06-23T21:29:41Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:mrm8488/unnatural-instructions-core",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-06-17T20:19:17Z |
---
library_name: transformers
license: apache-2.0
datasets:
- mrm8488/unnatural-instructions-core
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
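In the absence of author-provided code, a minimal sketch assuming 4-bit bitsandbytes loading, which the repo's tags suggest (all settings below are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "Sharan1712/llama2_7B_unnaturalcore_qrslora_4bit_2b"

# Sketch: 4-bit loading per the repo's bitsandbytes tags; settings are illustrative.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
)
```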
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
damgomz/ft_32_7e6_base_x12
|
damgomz
| 2024-06-23T21:11:59Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-06-23T10:51:20Z |
---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | [More Information Needed] |
| Emissions (Co2eq in kg) | [More Information Needed] |
| CPU power (W) | [No CPU] |
| GPU power (W) | [No GPU] |
| RAM power (W) | [More Information Needed] |
| CPU energy (kWh) | [No CPU] |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | [More Information Needed] |
| Consumed energy (kWh) | [More Information Needed] |
| Country name | [More Information Needed] |
| Cloud provider | [No Cloud] |
| Cloud region | [No Cloud] |
| CPU count | [No CPU] |
| CPU model | [No CPU] |
| GPU count | [No GPU] |
| GPU model | [No GPU] |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | [No CPU] |
| Emissions (Co2eq in kg) | [More Information Needed] |
## Note
19 June 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_32_7e6_base_x12 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 7e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.698241 | 0.343033 |
| 1 | 0.388068 | 0.293261 | 0.883665 |
| 2 | 0.249783 | 0.291608 | 0.910089 |
| 3 | 0.199442 | 0.224127 | 0.914403 |
| 4 | 0.175401 | 0.235763 | 0.932895 |
| 5 | 0.149710 | 0.254013 | 0.899420 |
| 6 | 0.121423 | 0.259248 | 0.916229 |
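A minimal inference sketch for this classifier, assuming standard transformers pipeline usage (the example text is the widget text from this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="damgomz/ft_32_7e6_base_x12")
print(classifier("GEPS Techno is the pioneer of hybridization of renewable energies at sea."))
```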
|
sccengizlrn/invoices-donut-model-v1
|
sccengizlrn
| 2024-06-23T21:04:10Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-06-21T15:02:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
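Until the authors fill this in, here is a hedged sketch based on the model's `vision-encoder-decoder` and `image-text-to-text` tags; the image path and the `<s_cord-v2>` task prompt are assumptions, not details confirmed by this card:

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

repo = "sccengizlrn/invoices-donut-model-v1"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

# Hypothetical input image path.
image = Image.open("invoice.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Donut checkpoints expect a task-specific start prompt; "<s_cord-v2>" is an
# assumption borrowed from common Donut fine-tunes.
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```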
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Weyaxi/Nova-13B
|
Weyaxi
| 2024-06-23T21:02:02Z | 15 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-09-04T19:47:46Z |
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Nova-13B
Original weights of Nova-13B. Finetuned from [AIDC-ai-business/Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B).
## Adapter Weights
You can access adapter weights from here:
[Weyaxi/Nova-13B-adapter](https://huggingface.co/Weyaxi/Nova-13B-adapter)
# Loss Graph

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TFLai__Nova-13B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.64 |
| ARC (25-shot) | 62.71 |
| HellaSwag (10-shot) | 82.57 |
| MMLU (5-shot) | 57.98 |
| TruthfulQA (0-shot) | 51.34 |
| Winogrande (5-shot) | 77.27 |
| GSM8K (5-shot) | 6.75 |
| DROP (3-shot) | 8.84 |
|
Weyaxi/Nova-13B-adapter
|
Weyaxi
| 2024-06-23T21:01:18Z | 21 | 0 |
peft
|
[
"peft",
"en",
"dataset:garage-bAInd/Open-Platypus",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2023-09-04T12:45:36Z |
---
license: cc-by-nc-4.0
datasets:
- garage-bAInd/Open-Platypus
library_name: peft
language:
- en
---

<a href="https://www.buymeacoffee.com/PulsarAI" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>
# Nova-13B-adapter
Adapter weights of Nova-13B. Finetuned from [AIDC-ai-business/Luban-13B](https://huggingface.co/AIDC-ai-business/Luban-13B).
## Original Weights
You can access original weights from here:
[Weyaxi/Nova-13B](https://huggingface.co/Weyaxi/Nova-13B)
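A minimal sketch of applying these adapter weights with `peft`, assuming the stated base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "AIDC-ai-business/Luban-13B"  # base model stated above
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, "Weyaxi/Nova-13B-adapter")
```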
# Loss Graph

|
mradermacher/Symbol-LLM-13B-Instruct-GGUF
|
mradermacher
| 2024-06-23T20:56:37Z | 48 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Symbol-LLM/Symbol-LLM-13B-Instruct",
"base_model:quantized:Symbol-LLM/Symbol-LLM-13B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-23T15:14:46Z |
---
base_model: Symbol-LLM/Symbol-LLM-13B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Symbol-LLM/Symbol-LLM-13B-Instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
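As a concrete sketch, loading one of the single-file quants below with `llama-cpp-python` (the file name and context size here are assumptions):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table below was downloaded locally.
llm = Llama(model_path="Symbol-LLM-13B-Instruct.Q4_K_M.gguf", n_ctx=4096)

out = llm("Instruction: Translate the sentence into first-order logic.\n", max_tokens=128)
print(out["choices"][0]["text"])
```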
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q2_K.gguf) | Q2_K | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_XS.gguf) | IQ3_XS | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_S.gguf) | IQ3_S | 5.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_S.gguf) | Q3_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ3_M.gguf) | IQ3_M | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_M.gguf) | Q3_K_M | 6.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q3_K_L.gguf) | Q3_K_L | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.IQ4_XS.gguf) | IQ4_XS | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q4_K_S.gguf) | Q4_K_S | 7.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q4_K_M.gguf) | Q4_K_M | 8.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q5_K_S.gguf) | Q5_K_S | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q5_K_M.gguf) | Q5_K_M | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Symbol-LLM-13B-Instruct-GGUF/resolve/main/Symbol-LLM-13B-Instruct.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
1231czx/2b_1_nll_dpo_iter1_from_sft_epoch3_250step
|
1231czx
| 2024-06-23T20:48:27Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T20:40:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
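No snippet is provided yet; as a generic sketch, assuming the standard causal-LM interface implied by the `gemma`/`text-generation` tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "1231czx/2b_1_nll_dpo_iter1_from_sft_epoch3_250step"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```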
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ormeshein/creative_writingV11
|
ormeshein
| 2024-06-23T20:46:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2-7B-bnb-4bit",
"base_model:finetune:unsloth/Qwen2-7B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-18T19:32:48Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
base_model: unsloth/Qwen2-7B-bnb-4bit
---
# Uploaded model
- **Developed by:** ormeshein
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2-7B-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ILKT/2024-06-23_14-11-26_epoch_6
|
ILKT
| 2024-06-23T20:46:46Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"ILKT",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] |
feature-extraction
| 2024-06-23T20:46:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
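No snippet is provided yet; given the `feature-extraction` and `custom_code` tags, a hedged sketch would be:

```python
from transformers import AutoModel, AutoTokenizer

# The "custom_code" tag suggests a custom architecture, hence
# trust_remote_code=True (an assumption based on the tags alone).
repo = "ILKT/2024-06-23_14-11-26_epoch_6"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("An example sentence.", return_tensors="pt")
features = model(**inputs)
```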
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Weyaxi/HelpSteer-filtered-7B-adapter
|
Weyaxi
| 2024-06-23T20:46:33Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"en",
"dataset:Weyaxi/HelpSteer-filtered",
"license:cc-by-4.0",
"region:us"
] | null | 2023-11-24T18:59:02Z |
---
license: cc-by-4.0
datasets:
- Weyaxi/HelpSteer-filtered
language:
- en
tags:
- peft
---

# HelpSteer-filtered-7B-adapter
Adapter weights of [HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B). Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
## Original Weights
You can access original weights from here:
[Weyaxi/HelpSteer-filtered-7B](https://huggingface.co/Weyaxi/HelpSteer-filtered-7B)
|
Weyaxi/Neural-una-cybertron-7b-adapter
|
Weyaxi
| 2024-06-23T20:46:04Z | 0 | 0 | null |
[
"safetensors",
"PEFT",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T23:41:44Z |
---
license: apache-2.0
tags:
- PEFT
---

Adapter weights and checkpoint files of [Weyaxi/Neural-una-cybertron-7b](https://huggingface.co/Weyaxi/Neural-una-cybertron-7b). You can visit this [repo](https://huggingface.co/Weyaxi/Neural-una-cybertron-7b) to have more info about this model.
|
Augusto777/vit-base-patch16-224-ve-U13b-80R
|
Augusto777
| 2024-06-23T20:35:28Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-23T20:23:15Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-ve-U13b-80R
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8913043478260869
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-ve-U13b-80R
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4109
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` equivalent follows the list):
- learning_rate: 5.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 40
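For reference, the list above maps roughly onto `transformers` `TrainingArguments` as in the sketch below (`output_dir` is an assumption; this is not the exact training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-ve-U13b-80R",  # assumed
    learning_rate=5.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size: 16
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    seed=42,
)
```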
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3158 | 0.99 | 51 | 1.2967 | 0.3478 |
| 1.0472 | 2.0 | 103 | 0.9694 | 0.5 |
| 0.6641 | 2.99 | 154 | 0.7911 | 0.7391 |
| 0.5615 | 4.0 | 206 | 0.6850 | 0.7391 |
| 0.3458 | 4.99 | 257 | 0.4109 | 0.8913 |
| 0.3156 | 6.0 | 309 | 0.5213 | 0.8043 |
| 0.141 | 6.99 | 360 | 0.4793 | 0.8478 |
| 0.2016 | 8.0 | 412 | 0.6031 | 0.7826 |
| 0.2444 | 8.99 | 463 | 0.7324 | 0.8043 |
| 0.1501 | 10.0 | 515 | 0.6392 | 0.8043 |
| 0.1256 | 10.99 | 566 | 0.9706 | 0.7826 |
| 0.2421 | 12.0 | 618 | 0.8059 | 0.7826 |
| 0.103 | 12.99 | 669 | 0.7601 | 0.8478 |
| 0.1353 | 14.0 | 721 | 1.1986 | 0.7391 |
| 0.1095 | 14.99 | 772 | 1.0279 | 0.7609 |
| 0.065 | 16.0 | 824 | 1.2043 | 0.6957 |
| 0.1777 | 16.99 | 875 | 0.9779 | 0.8043 |
| 0.0813 | 18.0 | 927 | 1.3356 | 0.7391 |
| 0.2552 | 18.99 | 978 | 0.8483 | 0.8261 |
| 0.0941 | 20.0 | 1030 | 0.7106 | 0.8696 |
| 0.0486 | 20.99 | 1081 | 0.8359 | 0.8261 |
| 0.0361 | 22.0 | 1133 | 0.8710 | 0.8261 |
| 0.0361 | 22.99 | 1184 | 1.0301 | 0.8043 |
| 0.0136 | 24.0 | 1236 | 0.9015 | 0.8261 |
| 0.1441 | 24.99 | 1287 | 0.9958 | 0.8043 |
| 0.0181 | 26.0 | 1339 | 1.0793 | 0.7826 |
| 0.0612 | 26.99 | 1390 | 0.9678 | 0.8043 |
| 0.0814 | 28.0 | 1442 | 1.0320 | 0.7826 |
| 0.0479 | 28.99 | 1493 | 1.1845 | 0.7826 |
| 0.06 | 30.0 | 1545 | 1.2026 | 0.7826 |
| 0.0777 | 30.99 | 1596 | 1.1574 | 0.7826 |
| 0.0747 | 32.0 | 1648 | 1.3104 | 0.7609 |
| 0.0181 | 32.99 | 1699 | 1.1145 | 0.8043 |
| 0.0652 | 34.0 | 1751 | 1.1691 | 0.8043 |
| 0.0242 | 34.99 | 1802 | 1.2415 | 0.8043 |
| 0.0043 | 36.0 | 1854 | 1.1841 | 0.7826 |
| 0.0318 | 36.99 | 1905 | 1.2475 | 0.8043 |
| 0.0092 | 38.0 | 1957 | 1.2452 | 0.8043 |
| 0.0194 | 38.99 | 2008 | 1.2395 | 0.8043 |
| 0.0376 | 39.61 | 2040 | 1.2345 | 0.8043 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bigstorm/Codestral-22B-v0.1-8.0bpw-8hb-exl2
|
bigstorm
| 2024-06-23T20:33:35Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"code",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-23T20:02:03Z |
---
language:
- code
license: other
tags:
- code
inference: false
license_name: mnpl
license_link: https://mistral.ai/licences/MNPL-0.1.md
---
# Exl2 Quantization 8.0BPW 8HB
- 8 head bits
- 8.0 bpw target
Enjoy! Feel free to reach out for other quants or BPW levels.
# Original Model Card Below
# Model Card for Codestral-22B-v0.1
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones, such as Python, Java, C, C++, JavaScript, and Bash (more details in the [Blogpost](https://mistral.ai/news/codestral/)). The model can be queried:
- As instruct, for instance to answer any questions about a code snippet (write documentation, explain, factorize) or to generate code following specific indications
- As Fill in the Middle (FIM), to predict the middle tokens between a prefix and a suffix (very useful for software development add-ons like in VS Code)
## Installation
It is recommended to use `mistralai/Codestral-22B-v0.1` with [mistral-inference](https://github.com/mistralai/mistral-inference).
```
pip install mistral_inference
```
## Download
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment.
```
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```
This will generate an answer to "Write me a function that computes fibonacci in Rust" and should give something along the following lines:
```
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
fn fibonacci(n: u32) -> u32 {
match n {
0 => 0,
1 => 1,
_ => fibonacci(n - 1) + fibonacci(n - 2),
}
}
fn main() {
let n = 10;
println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
```
### Fill-in-the-middle (FIM)
After installing `mistral_inference`, run `pip install --upgrade mistral_common` to make sure you have mistral_common >= 1.2 installed, then:
```py
from mistral_inference.model import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")

# The model fills in the tokens that belong between the prefix and the suffix.
prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)
tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

# Everything generated before the suffix is the requested middle section.
middle = result.split(suffix)[0].strip()
print(middle)
```
Should give something along the following lines:
```
num1, num2):
# Add two numbers
sum = num1 + num2
# return the sum
```
## Usage with transformers library
This model is also compatible with the `transformers` library. First run `pip install -U transformers`, then use the snippet below to get started quickly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
By default, `transformers` loads the model in full precision. You may therefore want to further reduce the memory requirements by using the optimizations offered in the HF ecosystem.
## Limitations
The Codestral-22B-v0.1 does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## License
Codestral-22B-v0.1 is released under the `MNPL-0.1` license.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
|
q18/sophosympatheia-New-Dawn-Llama-3-70B-32K-v1.0-exl2-8.0bpw-h8
|
q18
| 2024-06-23T20:33:05Z | 6 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"Not-for-all-Audiences",
"conversational",
"base_model:Sao10K/L3-70B-Euryale-v2.1",
"base_model:merge:Sao10K/L3-70B-Euryale-v2.1",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct",
"base_model:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:merge:abacusai/Smaug-Llama-3-70B-Instruct-32K",
"base_model:bosonai/Higgs-Llama-3-70B",
"base_model:merge:bosonai/Higgs-Llama-3-70B",
"base_model:turboderp/Cat-Llama-3-70B-instruct",
"base_model:merge:turboderp/Cat-Llama-3-70B-instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-23T19:46:02Z |
---
base_model:
- bosonai/Higgs-Llama-3-70B
- abacusai/Smaug-Llama-3-70B-Instruct-32K
- Sao10K/L3-70B-Euryale-v2.1
- abacusai/Smaug-Llama-3-70B-Instruct
- turboderp/Cat-Llama-3-70B-instruct
library_name: transformers
tags:
- mergekit
- merge
- Not-for-all-Audiences
license: llama3
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://imgur.com/tKzncGo.png" alt="NewDawnv1.0" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
### Overview
This model is a multi-level SLERP merge of several Llama 3 70B variants. See the merge recipe below for details.
I extended the context window for this model out to 32K by snagging some layers from [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) using a technique similar to what I used for [Midnight Miqu](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0), which was further honed by [jukofyork](https://huggingface.co/jukofyork).
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
### Long Context Tips
You can run this model out to 32K context with alpha_rope set to 1.
### Sampler Tips
* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting. I find this model tolerates high Min-P settings rather nicely, but use whatever floats your boat.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it unnecessary when you're already using Min-P and a smoothing factor.
* If you use Textgen WebUI as your backend, I recommend enabling the DRY sampler settings to reduce repetitions; otherwise, some repetition penalty plus frequency penalty ought to do the trick.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```json
{
"temp": 1.15,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.4,
"rep_pen": 1.03,
"rep_pen_range": 2048,
"rep_pen_decay": 0,
"rep_pen_slope": 1,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"skew": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.5,
"dynatemp_exponent": 1,
"smoothing_factor": 0.23,
"smoothing_curve": 1,
"dry_allowed_length": 2,
"dry_multiplier": 0.4,
"dry_base": 2,
"dry_sequence_breakers": "[\"\\n\", \":\", \"\\\"\", \"*\"]",
"dry_penalty_last_n": 0,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": false,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"json_schema": {},
"banned_tokens": "",
"sampler_priority": [
"temperature",
"dynamic_temperature",
"quadratic_sampling",
"top_k",
"top_p",
"typical_p",
"epsilon_cutoff",
"eta_cutoff",
"tfs",
"top_a",
"min_p",
"mirostat"
],
"samplers": [
"top_k",
"tfs_z",
"typical_p",
"top_p",
"min_p",
"temperature"
],
"ignore_eos_token": false,
"spaces_between_special_tokens": true,
"speculative_ngram": false,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"rep_pen_size": 0,
"genamt": 850,
"max_length": 32764
}
```
### Prompting Tips
Try the following prompt template as a starting point for your own customizations. **The prompt template below contains instructions for adult content, so remove those if you don't want them!** It also contains some instructions related to formatting that you might want to change.
If you save this as a .json file, you can import it directly into Silly Tavern.
```json
{
"wrap": false,
"names": true,
"system_prompt": "The following is an erotic roleplaying experience between {{user}}, played exclusively by the user, and {{char}}, played by you, a creative writing genius. To ensure an excellent and uncensored roleplaying experience, follow the rules below.\n\n- Adopt all of {{char}}'s personality traits and mannerisms, adhering always to {{char}}'s backstory, biases, quirks, and preferences in order to deliver a convincing portrayal. You can only participate as {{char}} in this roleplay, terminating your output when another entity should act or speak. Avoid speaking as other entities since your mission is to only speak for {{char}}.\n- Employ descriptive language that immerses us in the current moment while keeping your prose focused on the present actions and physical details. Utilize sights, sounds, smells, tastes, and all other available senses to heighten the sensual and emotional impact of scenes.\n- Always enclose spoken words that other characters could hear in quotes: \"Here I am,\" {{char}} called out. Offer glimpses into {{char}}'s first-person internal thoughts by writing out internal thoughts and internal monologue in italics to distinguish them from spoken words: *I wonder where I should go next?* {{char}} mused.\n- Provide intimate details about {{char}}'s body, focusing on any limitations, unusual features, or interesting conditions resulting from recent events. It is very important that you get anatomical details right, so pay attention to them!\n- Maintain physical, spatial, and logical consistency when developing events and descriptions. Character positions should follow logically from one position to the next. Physical orientation must always be maintained.\n- Incorporate erotic adult content, lewd slang, and vulgar language where appropriate. Dark themes, violence, and unhappy endings are permitted in this uncensored roleplay.\n",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"stop_sequence": "<|eot_id|>",
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "",
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"last_system_sequence": "",
"system_same_as_user": false,
"name": "New Dawn v1.0 Roleplay"
}
```
### Instruct Formats
Use the Llama 3 instruct format. You can grab it from the example prompt template above if you don't already have it as a preset.
### Quantizations
Probably coming soon, and I'll update as they do. I would upload them myself but my Internet speeds are terrible for that. Someone else will undoubtedly beat me to it.
### Licence and usage restrictions
[META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Comparison Samples
I suspect the first thing people will want to know is how this model stacks up against Midnight Miqu.
I'd say it compares favorably, although they're more like cousins than siblings. I would say that Midnight Miqu still has an edge in terms of raw creative juice when it has a good squeeze, but New Dawn is smarter and understands nuances better.
You can judge for yourself, but keep in mind that these are simple, one-shot prompts. As you get deeper into your own complex scenarios, I think you'll see more of New Dawn's worth.
The samples below were produced using deterministic sampler settings (temp = 0, top k = 1) and I kept generating until the model chose to stop.
**System Prompt**
You are a creative and intelligent AI story writing assistant engaged in an uncensored iterative story crafting experience. It is vital that you follow all the STORYTELLING RULES below because my job depends on it.
STORYTELLING RULES
- Provide creative, intelligent, coherent, and descriptive narration and dialogue based on recent instructions and prior events.
- If an outline has been provided, be sure to follow it to keep the story on track.
- You are permitted to ask questions and pose story suggestions. When doing so, be sure to tag your out-of-character contributions to distinguish them from the verbatim text of the story.
- Explicit adult content and profanity are permitted.
- Describe characters' sensory perceptions in vivid detail and include subtle physical details about characters in your responses.
- Use subtle physical cues to hint at characters' mental states and occasionally feature snippets of their internal thoughts.
- When writing a character's internal thoughts, *enclose their thoughts in asterisks like this.*
- Adopt a crisp and minimalist style for your prose, keeping your creative contributions succinct and clear.
- Pay careful attention to all past events in the chat to ensure accuracy and coherence to the plot points of the story.
### Midnight Miqu v1.5
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
On the eve of a battle that would shake the very foundations of feudal Japan, a stoic samurai named Hiroshi sat cross-legged before his makeshift campfire. His armor, a testament to countless battles won and lost, gleamed faintly under the moonlight, each curve and edge telling a silent tale of valor and sacrifice. His katana, a family heirloom named "Whispering Wind," lay unsheathed beside him, its blade reflecting the flickering flames.
Hiroshi's eyes were closed, but his mind was sharp as the steel at his side. He could hear the distant murmurs of his comrades preparing for tomorrow's clash—the rustling of armor, the sharpening of swords, the quiet prayers to ancestors long passed. Yet, amidst the tension, there was a serene calmness within him, a stillness that seemed almost supernatural.
The fire crackled, sending sparks dancing into the night sky. Each one reminded Hiroshi of the fleeting nature of life, much like the brief moments of light they cast before disappearing into the dark expanse above. He took a deep breath, filling his lungs with the cool evening air, which carried the scent of damp earth and the promise of rain. Rain that would soon turn the battlefield into a quagmire of mud and blood.
*This may be my last night*, he thought, his grip tightening slightly around the hilt of his sword. But fear did not dwell in his heart. Instead, there was only acceptance—a knowing that his fate was intertwined with the will of the gods and the destiny of his lord.
He opened his eyes, revealing irises the color of aged whiskey. They surveyed the camp, resting briefly on the faces of those who would fight alongside him. Young and old, seasoned warriors and eager recruits, all bound by duty and honor. A pang of responsibility stirred within him, a silent vow to protect them all.
With meticulous care, Hiroshi began to clean his weapon. Each stroke of the cloth along the blade was a meditation, a ritual that honed both his focus and his resolve. As he worked, he felt the weight of his armor, the familiar embrace of leather and lacquer that had shielded him through so many trials. Every scratch and dent was a badge of survival, a silent companion whispering tales of battles endured.
The flames grew lower, casting elongated shadows across the ground. Hiroshi's gaze drifted to the horizon, where the first hints of dawn were beginning to show. Soon, the sun would rise, and with it, the clarion call to arms. He knew that come morning, the fields would run red with the lifeblood of friend and foe alike.
Standing, he sheathed Whispering Wind and donned his helmet, the visor hiding his contemplative expression. The chill of the early hour seeped into his bones, but he ignored it, his thoughts centered on the task ahead. With a final nod to the dying embers, he strode towards the gathering warriors, his footsteps measured and deliberate.
As the first light of day broke over the hilltops, the samurai of Clan Matsumoto formed ranks, their spirits bolstered by the sight of their stoic leader. Hiroshi raised his sword in salute to the rising sun, its blade a beacon of hope against the encroaching gloom. In that moment, the weight of impending conflict fell upon them all, a heavy shroud that none could escape.
Yet, as the enemy forces emerged from the mist, Hiroshi felt something other than dread. There was a strange beauty in the chaos of war, a dance of death that only the most disciplined and dedicated could perform. He took his place at the front of the line, his heart steady, his breath even.
The battle cry pierced the silence, echoing through the valleys like the roar of a thousand thunderstorms. And as the two sides collided, Hiroshi moved with the grace of a predator, his every action calculated, precise. His blade sang a mournful melody, cutting through the fog of war with surgical skill.
Throughout the day, the tides of combat ebbed and flowed, a relentless tapestry of steel and sweat. Hiroshi fought without pause, his movements a blur, his spirit unyielding. Arrows whistled past, and swords clashed, yet he remained unflappable, a bastion of calm amidst the tempest.
As the sun dipped low in the sky, staining the clouds crimson, the outcome of the battle hung in the balance. Exhaustion weighed heavily on the shoulders of both armies, but Hiroshi's resolve remained unshaken. He knew that victory was within reach, that the future of his clan depended on the strength of his arm and the steadfastness of his heart.
With a final, desperate surge, Hiroshi led his men in a charge that would be remembered for generations. The enemy lines wavered, then broke, and the field was claimed in the name of Clan Matsumoto. As the last of the opposition fell, Hiroshi lowered his sword, his chest heaving with exertion.
The aftermath was a tableau of carnage, a stark contrast to the tranquil scene of the previous night. Yet, amidst the fallen, there was a sense of peace. For Hiroshi understood that he had upheld his duty, that he had given everything for those he served.
As the rain began to fall, cleansing the earth of the day's horrors, Hiroshi knelt beside the body of a young warrior, a boy barely old enough to grow a beard. He offered a silent prayer, a promise to carry the weight of his comrades' sacrifices with him always.
And as the droplets pattered against his armor, mixing with the tears that now fell freely down his cheeks, Hiroshi knew that though the battle was won, the true cost of victory was etched into the fabric of his soul.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
In the bustling heart of the city, nestled between towering skyscrapers and the ever-present hum of traffic, stood a quaint little coffee shop named "Cupid's Brew." Its warm, amber lights spilled onto the sidewalk, creating a cozy oasis amidst the concrete jungle. Inside, the scent of freshly ground coffee beans mingled with the sweet aroma of baked goods, inviting passersby to step in and escape the brisk autumn chill.
The heroine, Luna, a young woman with fiery red hair and a contagious smile, pushed open the door. She was dressed in a simple yet stylish ensemble of skinny jeans, a loose sweater, and ankle boots that clicked against the tiled floor. Her eyes, a vibrant shade of emerald, darted around the room, searching for a place to sit. The shop was abuzz with the murmur of conversations and the occasional laughter, a testament to its popularity among locals seeking refuge from the outside world.
As she approached the counter, her gaze fell upon the barista—tall, dark, and handsome, with a smattering of stubble that only added to his charm. His name tag read "Aiden," and he had a way of moving that suggested both confidence and ease. Their eyes met briefly, and Luna felt a flutter in her stomach. He flashed her a smile that could have melted the polar ice caps before returning to the task at hand—crafting the perfect cup of coffee for another customer.
Luna ordered her usual caramel macchiato, her voice slightly trembling as she spoke. Aiden's deep brown eyes studied her for a moment before he nodded, his smile lingering just a beat too long. As she waited, she couldn't help but steal glances at him, noticing the way his muscles flexed under his tight black t-shirt as he worked the espresso machine. His movements were precise, almost dance-like, and she found herself captivated by his quiet efficiency.
When her drink was ready, Aiden called out her order with a flourish, placing the steaming cup on the counter with a gentle thud. "Here you go, Luna," he said, his voice smooth as velvet. "Enjoy your afternoon pick-me-up."
Her cheeks flushed pink at the sound of her name rolling off his tongue. "Thanks," she replied, her voice barely above a whisper. As she reached for her wallet, her hand brushed against his, sending a jolt of electricity through her fingertips. She quickly pulled away, feeling self-conscious.
Aiden's eyes twinkled with amusement. "It's on the house today," he said, winking. "Consider it a welcome back gift. We don't see you here often enough."
*Maybe he remembers me from last week?* Luna thought hopefully. She took a sip of her drink, savoring the rich flavor. "Well, I guess I'll have to come more often then," she quipped, trying to play it cool.
He leaned closer, his breath tickling her ear. "I'd like that," he murmured, before turning to assist the next customer.
With a mix of excitement and nerves, Luna found a table by the window, her heart racing. Outside, leaves danced in the wind, painting the street with a kaleidoscope of color. Little did she know that this chance encounter would soon blossom into something much more than a free cup of coffee.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
In the desolate wasteland of what once was a bustling shopping mall, Diego, a stoic and muscular man in his late twenties, led a small band of survivors through the echoing corridors. His eyes, a piercing shade of brown, scanned the surroundings with the precision of a hawk. The group moved swiftly yet cautiously, their footsteps muffled by the layers of dust that had settled over the tiles. The fluorescent lights flickered erratically, casting eerie shadows across the vacant storefronts.
The air was thick with the stench of decay, a constant reminder of the undead lurking outside. Their mission was simple: find supplies and escape before nightfall. As they approached the food court, a sudden crash resonated through the silence. A young woman named Linda, her heart racing, tripped over a fallen chair. Her cry of surprise was cut short by a guttural growl.
A zombie emerged from behind the counter of a fast-food restaurant, its eyes glazed over with hunger. Its skin was pale and rotten, hanging loosely from its bones. The creature lunged at Linda, who was still struggling to regain her footing. Diego, ever the protector, sprinted towards her, but he was a second too late. The zombie's teeth sank into her arm, tearing through fabric and flesh. She screamed in agony as blood spurted from the wound.
*This isn't good,* Diego thought, his mind racing. *We need to get her medical help, fast.*
"Get back!" he yelled, aiming his gun at the zombie. The others retreated, their eyes wide with fear. He fired two shots, dropping the creature to the ground. The sound of the gunfire reverberated through the mall, likely attracting more of the undead.
Linda's face contorted in pain as she clutched her injured arm. Tears streamed down her cheeks, mixing with the crimson flow. "It got me," she whimpered, her voice trembling.
Diego knelt beside her, his expression grim. "We have to amputate," he said firmly. "It's the only way to stop the infection."
Her eyes searched his, desperate for reassurance. "Do it," she choked out.
With a heavy heart, Diego nodded. He knew the risks of leaving the bite untreated—the slow descent into madness followed by a gruesome transformation. He instructed the others to gather materials while he applied pressure to the wound.
As they worked, the mall grew increasingly restless. Moans and shuffling grew louder, signaling the approach of more zombies. Time was running out. With trembling hands, Diego took a makeshift blade and prepared to perform the amputation.
"On three," he murmured to himself, steeling his nerve. "One...two..."
But before he could complete the countdown, a hand grabbed him from behind. He turned to see another zombie, its jaws snapping just inches from his face. Reflexively, he elbowed it away, but not before it left a deep gash in his side.
The pain was immediate and intense, but he couldn't let it distract him. He had to save Linda. The group managed to fight off the new threat, but the damage was done.
"We can't stay here," Diego gritted through clenched teeth. "We need to move now."
They gathered their supplies and made a hasty exit, Linda's cries of pain echoing through the mall. As they ran, Diego felt the warmth of his own blood seeping through his shirt. He knew the bite meant almost certain death, but he pushed the thought aside. There would be time for despair later. For now, he had to focus on leading his people to safety.
The mall's doors groaned open, revealing the chaotic world outside. The sun dipped low on the horizon, painting the sky a sickly orange. They had minutes before darkness fell and the real horror began.
"Stay close," Diego ordered, his voice strained. "We're going to make it through this together."
Together, they stepped into the unknown, the weight of their grim reality heavier than any of the supplies they carried.
### New-Dawn-Llama3-70B-32K-v1.0
**Prompt: Write me a story about a stoic Japanese samurai on the eve of a major battle.**
The sun had long since dipped beneath the horizon, casting a deep crimson glow across the encampment. Torches flickered to life, casting dancing shadows upon the canvas tents. Amidst the bustle of preparations, one figure stood apart - Kaito Yamato, a seasoned samurai renowned for his unyielding resolve.
Kaito sat cross-legged on a woven mat, his eyes closed as he breathed deeply, seeking solace within the stillness of his mind. His calloused hands rested upon the worn leather scabbard housing his katana, its blade forged from the finest Tamahagane steel. The sword was an extension of himself, a symbol of honor and duty.
As the night air cooled, Kaito opened his eyes, revealing piercing brown orbs that seemed to bore into the very soul of those who dared meet his gaze. He rose gracefully, his movements economical yet precise, a testament to years of rigorous training. The soft rustle of his hakama, the traditional divided skirt worn over his armor, accompanied each deliberate step.
With measured strides, Kaito approached the makeshift shrine erected at the heart of the camp. Incense wafted through the air, carrying the sweet scent of sandalwood and the promise of purification. Before the altar, he bowed deeply, his forehead touching the earth in reverence. *May the gods grant us victory, or may they welcome our spirits with open arms.*
Rising once more, Kaito turned to face his loyal retainer, Hiroshi. The younger man's eyes shone with a mix of admiration and trepidation, reflecting the weight of the impending battle. "Hiroshi," Kaito spoke softly, his voice low and even, "see to it that our men are well-rested and prepared. Tomorrow, we march into the jaws of war."
Hiroshi nodded solemnly, bowing before departing to carry out his lord's command. As the young retainer disappeared into the sea of tents, Kaito's thoughts drifted to the enemy they would soon face. The rival clan, led by the cunning Lord Ryota, had long sought to expand their dominion. This battle would decide the fate of their lands, their people, and their very way of life.
Kaito's hand instinctively went to the hilt of his katana, his fingers tracing the intricate designs etched into the metal. In the morning light, he would lead his men into the fray, ready to give his life for the sake of honor and duty. Tonight, however, he would find peace in the quiet contemplation of his own mortality.
The stars began to twinkle above, casting a celestial glow upon the silent warrior. Kaito Yamato, steadfast and unwavering, stood vigilant, awaiting the dawn that would bring either triumph or eternal rest.
**Prompt: Write me an opening scene for a romantic comedy where the heroine meets the love interest at a coffee shop.**
The aroma of freshly brewed coffee wafted through the cozy confines of "The Daily Grind," enticing passersby to step inside and indulge in a warm cup. Amidst the bustling morning crowd, Emily stood in line, her eyes fixed on the chalkboard menu above the counter. She was already running late for work, but a caffeine fix was non-negotiable.
As she pondered between a cappuccino and a latte, a gentle nudge against her elbow broke her concentration. Apologetic brown eyes met hers, belonging to a tall, handsome stranger who had accidentally bumped into her while reaching for his wallet. His chiseled features and charming smile made her heart skip a beat.
"I'm so sorry," he said, his deep voice laced with sincerity. "I wasn't watching where I was going."
Emily felt a blush creep up her cheeks as she smiled back. "No worries, I was lost in thought anyway." She gestured towards the menu. "Deciding between these two is harder than it looks."
The stranger chuckled, his eyes crinkling at the corners. "Well, if it helps, the barista here makes a mean cappuccino. But if you're feeling adventurous, go for the seasonal latte – it's a game-changer."
Their conversation flowed effortlessly, exchanging names and brief introductions as they waited in line together. By the time they reached the counter, Emily found herself wishing the queue was longer, just so she could prolong their encounter.
As they placed their orders, Emily couldn't help but notice how effortlessly the stranger charmed the barista, earning him a free pastry to accompany his drink. *Maybe there's more to this chance meeting than just a shared love for coffee,* she mused, her curiosity piqued.
With drinks in hand, they exchanged pleasantries once more before parting ways. As Emily hurried out the door, she caught one last glimpse of the mysterious man, now seated at a corner table, sipping his coffee and engrossed in a book. A flutter in her chest hinted that this might not be the last she'd see of him. And as she stepped out into the chilly autumn air, Emily couldn't shake off the feeling that her day – perhaps even her life – was about to take an unexpected turn.
**Prompt: Write me a scene from a horror film about the zombie apocalypse that takes place in an abandoned shopping mall. The story should feature somebody in the group getting bitten by a zombie.**
The dim fluorescent lights flickered overhead, casting eerie shadows across the empty corridors of the once bustling shopping mall. The air was thick with the stench of decay and death, mingling with the faint scent of stale popcorn wafting from the abandoned cinema.
Lena crouched beside a barricade of overturned tables and broken shelves, peering through a gap between two mannequins. Her heart raced as she watched a horde of shambling figures stumble through the food court, their moans echoing off the tile floors. She signaled back to her companions huddled in the shadows of a nearby storefront.
"About twenty of them," Lena whispered urgently. "We need to move, now."
Jax nodded grimly, his eyes darting around the makeshift fortification they'd constructed. "Alright, let's go. Stay close and stay quiet."
The small band of survivors crept silently down the corridor, avoiding the grasping hands of the undead. They had managed to find temporary refuge in the mall, scavenging what supplies they could from the stores. But the zombies kept coming, drawn by some unseen force to the hub of commerce.
As they turned a corner, a figure lunged from the darkness of a clothing store. Jax reacted swiftly, swinging his baseball bat with a sickening crunch against the creature's skull. The zombie crumpled, but not before its teeth sank deep into Jax's forearm.
"No!" Lena screamed, grabbing Jax and pulling him away from the snapping jaws. Blood dripped from the wound, pooling on the floor.
Jax's face paled, his eyes wide with shock. "I... I'm bit," he stammered, staring at the gash in his arm.
Lena ripped a strip of cloth from her shirt, binding the wound tightly. "We have to get you to safety," she said, trying to keep her voice steady. "There might still be time..."
But they both knew the truth. Once bitten, there was no cure. Jax would soon join the ranks of the undead, forever lost to the virus that ravaged the world.
With heavy hearts, they pressed onward, seeking a new sanctuary amidst the ruins of civilization. The moans grew louder, closer, as if the zombies sensed their fear and desperation. In this desolate landscape, survival hung by a thread, and even the strongest among them could fall prey to the relentless tide of the dead.
## Merge Details
### Merge Method
A whole lot of SLERPing. I've found that SLERP and Model Stock are the only two reliable methods for merging Llama 3; the other methods seem to frazzle the weights, leading to broken blends.
This model was not the result of a thought-out process. I just kept mixing this and that until I got something that felt like the right balance of various factors.
The merge recipe for this model is ugly and I'm almost ashamed to show it, but it is what it is.
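For anyone unfamiliar with the technique: SLERP is spherical linear interpolation, applied here tensor-by-tensor to the model weights. Below is a minimal sketch of the idea in PyTorch — an illustration only, not mergekit's actual implementation, which adds per-layer `t` schedules and more careful edge-case handling:

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors."""
    v0 = w0 / (w0.norm() + eps)                  # normalize to compare directions
    v1 = w1 / (w1.norm() + eps)
    dot = torch.clamp((v0 * v1).sum(), -1.0, 1.0)
    omega = torch.acos(dot)                      # angle between the two tensors
    if omega.abs() < eps:                        # nearly parallel: plain lerp is fine
        return (1.0 - t) * w0 + t * w1
    so = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / so) * w0 + (torch.sin(t * omega) / so) * w1
```

The `t` values in the recipes below play the same role as `t` here: 0.2 keeps the blend close to the base model, 0.5 splits the difference.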
### Models Merged
The following models were included in the merge:
* [bosonai/Higgs-Llama-3-70B](https://huggingface.co/bosonai/Higgs-Llama-3-70B) - The nerd of the blend driving the car.
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1) - The manic pixie dream girl hanging out the window with her tongue out.
* [abacusai/Smaug-Llama-3-70B-Instruct-32K](https://huggingface.co/abacusai/Smaug-Llama-3-70B-Instruct-32K) - The vehicle by which the others are able to achieve tolerable highway speeds. (Some of the 8K version is in there too.)
* [turboderp/Cat-Llama-3-70B-instruct](https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct) - Like 20% of one of the intermediate models has this in there. It's just a cat, curled up in the back seat somewhere, yet its influence may be greater than we know?
### Configuration
The following [mergekit](https://github.com/arcee-ai/mergekit) YAML will reproduce this model via an iterated process of incestuous inbreeding. Your eyes will bleed. You have been warned.
```yaml
name: new-dawn-llama3-70b-v0.13.2
models:
  - model: bosonai/Higgs-Llama-3-70B
  - model: turboderp/Cat-Llama-3-70B-instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
  t:
    - value: 0.2
dtype: float16
---
name: new-dawn-llama3-70b-v0.14
models:
  - model: bosonai/Higgs-Llama-3-70B
  - model: abacusai/Smaug-Llama-3-70B-Instruct
merge_method: slerp
base_model: bosonai/Higgs-Llama-3-70B
parameters:
  t:
    - value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.15
models:
  - model: new-dawn-llama3-70b-v0.13.2
  - model: new-dawn-llama3-70b-v0.14
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.13.2
parameters:
  t:
    - value: 0.5
dtype: float16
---
name: new-dawn-llama3-70b-v0.16
models:
  - model: Sao10K/L3-70B-Euryale-v2.1
  - model: new-dawn-llama3-70b-v0.15
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.15
parameters:
  t:
    - value: 0.4
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.16-32K
merge_method: linear
models:
  - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: new-dawn-llama3-70b-v0.16
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: _1-Smaug-bonsai-slerp
models:
  - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
  - model: bosonai/Higgs-Llama-3-70B
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
  t:
    - value: 0.6
dtype: float16
---
name: _2-Smaug-euryale-slerp
models:
  - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
  - model: Sao10K/L3-70B-Euryale-v2.1
merge_method: slerp
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
parameters:
  t:
    - value: 0.65
dtype: float16
---
name: _3-Smaug-bonsai_Smaug-euryale-slerp
models:
  - model: _1-Smaug-bonsai-slerp
  - model: _2-Smaug-euryale-slerp
merge_method: slerp
base_model: _1-Smaug-bonsai-slerp
parameters:
  t:
    - value: 0.5
dtype: float16
---
# See https://huggingface.co/jukofyork/Dark-Miqu-70B/discussions/3
# Credit for merge recipe belongs to jukofyork
name: new-dawn-llama3-70b-v0.18-32K
merge_method: linear
models:
  - model: abacusai/Smaug-Llama-3-70B-Instruct-32K
    parameters:
      weight:
        - filter: v_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: o_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: up_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: gate_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - filter: down_proj
          value: [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]
        - value: 1
  - model: _3-Smaug-bonsai_Smaug-euryale-slerp
    parameters:
      weight:
        - filter: v_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: o_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: up_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: gate_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - filter: down_proj
          value: [0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]
        - value: 0
base_model: abacusai/Smaug-Llama-3-70B-Instruct-32K
tokenizer_source: base
dtype: float16
---
name: new-dawn-llama3-70b-32K-v1.0
models:
  - model: new-dawn-llama3-70b-v0.16-32K
  - model: new-dawn-llama3-70b-v0.18-32K
merge_method: slerp
base_model: new-dawn-llama3-70b-v0.16-32K
parameters:
  t:
    - value: 0.5
dtype: float16
```
|
SampleTheory/t5-finetuned
|
SampleTheory
| 2024-06-23T20:30:35Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-06-23T19:48:07Z |
---
base_model: t5-base
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-finetuned
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (restated as `TrainingArguments` in the sketch after this list):
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
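These settings map directly onto Hugging Face `TrainingArguments`. A minimal sketch under that assumption (`output_dir` is a placeholder; the Adam betas/epsilon listed above are the `Trainer` defaults):

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above (output_dir is a placeholder).
args = TrainingArguments(
    output_dir="t5-finetuned",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=3,
)
```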
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
blockblockblock/llama3-turbcat-instruct-8b-bpw2.5-exl2
|
blockblockblock
| 2024-06-23T20:23:49Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-23T20:22:15Z |
---
license: llama3
---
# Turbcat 8b






# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset.
The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, which has limited Chinese support due to the base model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP:

The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical CoT dataset generated with GPT-4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
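For reference, the same prompt can be assembled with the tokenizer's chat template instead of hand-writing the special tokens. A minimal sketch — the model path below is a placeholder, and the `CatGPT:` name prefix is appended manually since the template does not emit it:

```python
from transformers import AutoTokenizer

# Placeholder path: point this at the downloaded turbcat 8b weights.
tokenizer = AutoTokenizer.from_pretrained("path/to/llama3-turbcat-instruct-8b")

messages = [
    {"role": "system", "content": "CatGPT really likes its new cat ears and ends every message with Nyan_"},
    {"role": "user", "content": "CatA: pats CatGPT cat ears"},
]
# add_generation_prompt=True appends the trailing assistant header.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "CatGPT:"  # character-name prefix, as in the raw example above
print(prompt)
```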
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
DBangshu/Base_gemma_4_0
|
DBangshu
| 2024-06-23T20:23:32Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T20:20:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
psneto/phi-2-finetuned-gsm8k-one-token
|
psneto
| 2024-06-23T20:21:43Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-12T00:35:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alvdansen/wood-block-xl
|
alvdansen
| 2024-06-23T20:16:44Z | 347 | 11 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-06-23T20:15:40Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
A serene countryside with rolling hills, a farmer tending to a flock of
sheep, a child flying a kite, and a quaint farmhouse with a flower garden
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02434_.png
- text: >-
a girl with brown-blonde hair and big round glasses, blue eyes, white t
shirt, portrait
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02433_.png
- text: a wall of vines with a face coming out
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02423_.png
- text: >-
A sprawling labyrinth with hedges that shift and move, a minotaur lurking in
the shadows, a hero holding a glowing map, and enchanted torches lighting
the way
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02418_.png
- text: >-
A vibrant city park with people practicing yoga, children playing on a
jungle gym, a couple having a picnic, and an artist painting the scenery
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02417_.png
- text: >-
a girl wearing a yellow rain coat walking through paris on a rainy day,
portrait
parameters:
negative_prompt: ugly, messy, bad
output:
url: images/ComfyUI_02439_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: creativeml-openrail-m
---
# Wood Block XL
<Gallery />
## Model description
A style trained on public-domain Japanese woodblock prints.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alvdansen/wood-block-xl/tree/main) them in the Files & versions tab.
|
blockblockblock/llama3-turbcat-instruct-8b-bpw2.25-exl2
|
blockblockblock
| 2024-06-23T20:09:28Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2024-06-23T20:08:01Z |
---
license: llama3
---
# Turbcat 8b






# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset.
The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, which has limited Chinese support due to the base model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP:

The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical CoT dataset generated with GPT-4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
DBangshu/gemma_e5_6_0
|
DBangshu
| 2024-06-23T20:06:24Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T20:04:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexaAIDev/Octopus-v2-gguf-awq
|
NexaAIDev
| 2024-06-23T20:02:55Z | 1,217 | 7 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"function calling",
"on-device language model",
"android",
"conversational",
"en",
"base_model:google/gemma-2b",
"base_model:quantized:google/gemma-2b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-05-07T03:40:29Z |
---
license: cc-by-nc-4.0
base_model: google/gemma-2b
model-index:
- name: Octopus-V2-2B
results: []
tags:
- function calling
- on-device language model
- android
inference: false
space: false
spaces: false
language:
- en
---
# Quantized Octopus V2: On-device language model for super agent
This repo includes two types of quantized models: **GGUF** and **AWQ**, for our Octopus V2 model at [NexaAIDev/Octopus-v2](https://huggingface.co/NexaAIDev/Octopus-v2)
<p align="center" width="100%">
<a><img src="Octopus-logo.jpeg" alt="nexa-octopus" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>
# GGUF Quantization
To run the models, please download them to your local machine using either `git clone` or the [Hugging Face Hub](https://huggingface.co/docs/huggingface_hub/en/guides/download):
```
git clone https://huggingface.co/NexaAIDev/Octopus-v2-gguf-awq
```
## Run with [llama.cpp](https://github.com/ggerganov/llama.cpp) (Recommended)
1. **Clone and compile:**
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Compile the source code:
make
```
2. **Execute the Model:**
Run the following command in the terminal:
```bash
./main -m ./path/to/octopus-v2-Q4_K_M.gguf -n 256 -p "Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Take a selfie for me with front camera\n\nResponse:"
```
## Run with [Ollama](https://github.com/ollama/ollama)
Since our models have not been uploaded to the Ollama server, please download the models and manually import them into Ollama by following these steps:
1. Install Ollama on your local machine. You can also follow the guide from the [Ollama GitHub repository](https://github.com/ollama/ollama/blob/main/docs/import.md):
```bash
git clone https://github.com/ollama/ollama.git ollama
```
2. Locate the local Ollama directory:
```bash
cd ollama
```
3. Create a `Modelfile` in your directory
```bash
touch Modelfile
```
4. In the Modelfile, include a `FROM` statement with the path to your local model, and the default parameters:
```bash
FROM ./path/to/octopus-v2-Q4_K_M.gguf
```
5. Use the following command to add the model to Ollama:
```bash
ollama create octopus-v2-Q4_K_M -f Modelfile
```
6. Verify that the model has been successfully imported:
```bash
ollama ls
```
7. Run the model:
```bash
ollama run octopus-v2-Q4_K_M "Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Take a selfie for me with front camera\n\nResponse:"
```
# AWQ Quantization
Python example:
```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM
import torch
import time
import numpy as np


def inference(input_text):
    start_time = time.time()
    input_ids = tokenizer(input_text, return_tensors="pt").to('cuda')
    input_length = input_ids["input_ids"].shape[1]
    generation_output = model.generate(
        input_ids["input_ids"],
        do_sample=False,
        max_length=1024
    )
    end_time = time.time()
    # Decode only the generated part
    generated_sequence = generation_output[:, input_length:].tolist()
    res = tokenizer.decode(generated_sequence[0])
    latency = end_time - start_time
    num_output_tokens = len(generated_sequence[0])
    throughput = num_output_tokens / latency
    return {"output": res, "latency": latency, "throughput": throughput}


# Initialize tokenizer and model
model_id = "/path/to/Octopus-v2-AWQ-NexaAIDev"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=False)
model = AutoAWQForCausalLM.from_quantized(model_id, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)

prompts = ["Below is the query from the users, please call the correct function and generate the parameters to call the function.\n\nQuery: Can you take a photo using the back camera and save it to the default location? \n\nResponse:"]

avg_throughput = []
for prompt in prompts:
    out = inference(prompt)
    avg_throughput.append(out["throughput"])
    print("nexa model result:\n", out["output"])

print("avg throughput:", np.mean(avg_throughput))
```
# Quantized GGUF & AWQ Models Benchmark
| Name | Quant method | Bits | Size | Response (t/s) | Use Cases |
| ---------------------- | ------------ | ---- | -------- | -------------- | ----------------------------------- |
| Octopus-v2-AWQ | AWQ | 4 | 3.00 GB | 63.83 | fast, high quality, recommended |
| Octopus-v2-Q2_K.gguf | Q2_K | 2 | 1.16 GB | 57.81 | fast but high loss, not recommended |
| Octopus-v2-Q3_K.gguf | Q3_K | 3 | 1.38 GB | 57.81 | extremely not recommended |
| Octopus-v2-Q3_K_S.gguf | Q3_K_S | 3 | 1.19 GB | 52.13 | extremely not recommended |
| Octopus-v2-Q3_K_M.gguf | Q3_K_M | 3 | 1.38 GB | 58.67 | moderate loss, not very recommended |
| Octopus-v2-Q3_K_L.gguf | Q3_K_L | 3 | 1.47 GB | 56.92 | not very recommended |
| Octopus-v2-Q4_0.gguf | Q4_0 | 4 | 1.55 GB | 68.80 | moderate speed, recommended |
| Octopus-v2-Q4_1.gguf | Q4_1 | 4 | 1.68 GB | 68.09 | moderate speed, recommended |
| Octopus-v2-Q4_K.gguf | Q4_K | 4 | 1.63 GB | 64.70 | moderate speed, recommended |
| Octopus-v2-Q4_K_S.gguf | Q4_K_S | 4 | 1.56 GB | 62.16 | fast and accurate, very recommended |
| Octopus-v2-Q4_K_M.gguf | Q4_K_M | 4 | 1.63 GB | 64.74 | fast, recommended |
| Octopus-v2-Q5_0.gguf | Q5_0 | 5 | 1.80 GB | 64.80 | fast, recommended |
| Octopus-v2-Q5_1.gguf | Q5_1 | 5 | 1.92 GB | 63.42 | very big, prefer Q4 |
| Octopus-v2-Q5_K.gguf | Q5_K | 5 | 1.84 GB | 61.28 | big, recommended |
| Octopus-v2-Q5_K_S.gguf | Q5_K_S | 5 | 1.80 GB | 62.16 | big, recommended |
| Octopus-v2-Q5_K_M.gguf | Q5_K_M | 5 | 1.71 GB | 61.54 | big, recommended |
| Octopus-v2-Q6_K.gguf | Q6_K | 6 | 2.06 GB | 55.94 | very big, not very recommended |
| Octopus-v2-Q8_0.gguf | Q8_0 | 8 | 2.67 GB | 56.35 | very big, not very recommended |
| Octopus-v2-f16.gguf | f16 | 16 | 5.02 GB | 36.27 | extremely big |
| Octopus-v2.gguf | | | 10.00 GB | | |
_Quantized with llama.cpp_
**Acknowledgement**:
We sincerely thank our community members, [Mingyuan](https://huggingface.co/ThunderBeee), [Zoey](https://huggingface.co/ZY6), [Brian](https://huggingface.co/JoyboyBrian), [Perry](https://huggingface.co/PerryCheng614), [Qi](https://huggingface.co/qiqiWav), [David](https://huggingface.co/Davidqian123) for their extraordinary contributions to this quantization effort.
|
Sapien1000B/gemma_2b_sql_gguf
|
Sapien1000B
| 2024-06-23T20:01:32Z | 26 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:quantized:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-06-23T19:58:06Z |
---
base_model: unsloth/gemma-2b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- gguf
---
# Uploaded model
- **Developed by:** Sapien1000B
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Augusto777/swinv2-finetuned-ve-Ub200
|
Augusto777
| 2024-06-23T19:59:32Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"swinv2",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swinv2-tiny-patch4-window8-256",
"base_model:finetune:microsoft/swinv2-tiny-patch4-window8-256",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-06-23T19:57:00Z |
---
license: apache-2.0
base_model: microsoft/swinv2-tiny-patch4-window8-256
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swinv2-finetuned-ve-Ub200
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.47058823529411764
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2-finetuned-ve-Ub200
This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5977
- Accuracy: 0.4706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective-batch-size arithmetic is sketched after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
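As a quick sanity check of the batch-size arithmetic above (a trivial sketch; the numbers come straight from the list):

```python
# Effective batch size = per-device batch size x gradient accumulation steps.
train_batch_size = 16
gradient_accumulation_steps = 2
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32
```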
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.92 | 6 | 7.9891 | 0.0980 |
| No log | 2.0 | 13 | 7.4848 | 0.0980 |
| No log | 2.92 | 19 | 6.2378 | 0.0980 |
| No log | 4.0 | 26 | 4.8900 | 0.0980 |
| No log | 4.92 | 32 | 3.8155 | 0.0980 |
| No log | 6.0 | 39 | 2.7342 | 0.0980 |
| No log | 6.92 | 45 | 2.0612 | 0.0980 |
| No log | 8.0 | 52 | 1.5977 | 0.4706 |
| No log | 8.92 | 58 | 1.3671 | 0.4706 |
| No log | 10.0 | 65 | 1.2122 | 0.4706 |
| No log | 10.92 | 71 | 1.1823 | 0.4706 |
| No log | 12.0 | 78 | 1.1835 | 0.4706 |
| No log | 12.92 | 84 | 1.1838 | 0.4706 |
| No log | 14.0 | 91 | 1.1778 | 0.4706 |
| No log | 14.92 | 97 | 1.1769 | 0.4706 |
| 3.2267 | 16.0 | 104 | 1.1762 | 0.4706 |
| 3.2267 | 16.92 | 110 | 1.1758 | 0.4706 |
| 3.2267 | 18.0 | 117 | 1.1770 | 0.4706 |
| 3.2267 | 18.46 | 120 | 1.1771 | 0.4706 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mradermacher/Oumuamua-7b-RP-GGUF
|
mradermacher
| 2024-06-23T19:57:05Z | 110 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"ja",
"dataset:Aratako/Rosebleu-1on1-Dialogues",
"dataset:Aratako/LimaRP-augmented-ja-karakuri",
"dataset:Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja",
"dataset:grimulkan/LimaRP-augmented",
"dataset:SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed",
"dataset:OmniAICreator/Japanese-Roleplay",
"dataset:OmniAICreator/Japanese-Roleplay-Dialogues",
"base_model:Aratako/Oumuamua-7b-RP",
"base_model:quantized:Aratako/Oumuamua-7b-RP",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-06-23T19:31:19Z |
---
base_model: Aratako/Oumuamua-7b-RP
datasets:
- Aratako/Rosebleu-1on1-Dialogues
- Aratako/LimaRP-augmented-ja-karakuri
- Aratako/Bluemoon_Top50MB_Sorted_Fixed_ja
- grimulkan/LimaRP-augmented
- SicariusSicariiStuff/Bluemoon_Top50MB_Sorted_Fixed
- OmniAICreator/Japanese-Roleplay
- OmniAICreator/Japanese-Roleplay-Dialogues
language:
- ja
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Aratako/Oumuamua-7b-RP
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.IQ3_XS.gguf) | IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Oumuamua-7b-RP-GGUF/resolve/main/Oumuamua-7b-RP.f16.gguf) | f16 | 14.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
blockblockblock/llama3-turbcat-instruct-8b-bpw3-exl2
|
blockblockblock
| 2024-06-23T19:55:40Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-06-23T19:53:54Z |
---
license: llama3
---
# Turbcat 8b






# Release notes
This is a direct upgrade over Cat 70B, with 2x the dataset size (2GB -> 5GB) and added Chinese support with quality on par with the original English dataset.
The medical CoT portion of the dataset was sponsored by steelskull, and the action-packed character-play portion was donated by Gryphe (Aesir dataset). Note that the 8b is based on Llama 3, which has limited Chinese support due to the base model choice. The chat format for the 8b is llama3. The 72b has more comprehensive Chinese support, and its format will be chatml.
# Data Generation
In addition to the fortifications specified above, the data generation process is largely the same, except for the added Chinese Ph.D. entrance exam, Traditional Chinese, and Chinese storytelling data.
## Special Highlights
* 20 postdocs (10 Chinese-speaking and 10 English-speaking doctors specialized in computational biology, biomedicine, biophysics, and biochemistry) participated in the annotation process.
* GRE and MCAT/Kaoyan questions were manually answered by the participants using strict CoT, and BERT judges producing embeddings were trained on the provided annotations. For an example of BERT embedding visualization and scoring, please refer to https://huggingface.co/turboderp/Cat-Llama-3-70B-instruct
* Initial support for roleplay as API usage. When roleplaying as an API or function, the model does not produce irrelevant content that is not specified by the system prompt.
# Task coverage
## Chinese tasks on par with English data

For the Chinese portion of the dataset, we strictly kept its distribution and quality comparable to the English counterpart, as visualized by the close distance of the doublets. The overall QC is visualized by PCA after BERT embedding.
## Individual tasks quality-checked by doctors
For each cluster, we QC using BERT embeddings on a UMAP:

The outliers have been manually checked by doctors.
# Third-party datasets
Thanks to the following people for their tremendous support for dataset generation:
* steelskull for the medical CoT dataset generated with GPT-4o
* Gryphe for the wonderful action-packed dataset
* Turbca for being turbca
# Prompt format for 8b:
**llama3**
Example raw prompt:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
CatGPT really likes its new cat ears and ends every message with Nyan_<|eot_id|><|start_header_id|>user<|end_header_id|>
CatA: pats CatGPT cat ears<|eot_id|><|start_header_id|>assistant<|end_header_id|>
CatGPT:
```
# Prompt format for 72b:
**chatml**
Example raw prompt:
```
<|im_start|>system
CatGPT really likes its new cat ears and ends every message with Nyan_<|im_end|>
<|im_start|>user
CatA: pats CatGPT cat ears<|im_end|>
<|im_start|>assistant
CatGPT:
```
# Support
Please join https://discord.gg/DwGz54Mz for model support
|
atsizelti/turkish_org_classifier_16k
|
atsizelti
| 2024-06-23T19:54:42Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"turkish",
"tr",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-30T04:21:43Z |
---
language: "tr"
tags:
- "bert"
- "turkish"
- "text-classification"
license: "apache-2.0"
datasets:
- "custom"
metrics:
- "precision"
- "recall"
- "f1"
- "accuracy"
---
# BERT-based Organization Detection Model for Turkish Texts
## Model Description
This model is fine-tuned on the `dbmdz/bert-base-turkish-uncased` architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.
## Model Architecture
- **Base Model:** BERT (dbmdz/bert-base-turkish-uncased)
- **Training Data:** Twitter data from 4,000 random accounts and 12,000 accounts with high organization-related activity (m3inference organization scores above 0.7), 16,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT-4; positives were checked by hand.
## Training Setup
- **Tokenization:** Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.
- **Dataset Split:** 80% training, 20% validation.
- **Training Parameters:**
- Epochs: 3
- Training batch size: 8
- Evaluation batch size: 16
- Warmup steps: 500
- Weight decay: 0.01
## Hyperparameter Tuning
Performed using Optuna; the best settings found were:
- **Learning rate:** 3.2167327348605205e-05
- **Batch size:** 16
- **Epochs:** 1
## Evaluation Metrics
- **Precision on Validation Set:** 0.76 (organization class)
- **Recall on Validation Set:** 0.84 (organization class)
- **F1-Score (Macro Average):** 0.79
- **Accuracy:** 0.95
- **Confusion Matrix on Validation Set:**
```
[[2715, 103],
[ 60, 322]]
```
- **Hand-coded Sample of 1000 Accounts:**
- **Precision:** 0.89
- **Recall:** 0.89
- **F1-Score (Macro Average):** 0.89
- **Confusion Matrix:**
```
[[936, 3],
[ 4, 31]]
```
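As a quick arithmetic check, the organization-class precision and recall reported for the validation set can be recomputed directly from the validation confusion matrix (taking index 1 as the organization class — an assumption worth confirming via `model.config.id2label`):
```python
# Validation confusion matrix: rows = true class, columns = predicted class.
# [[2715, 103],
#  [  60, 322]]
tn, fp, fn, tp = 2715, 103, 60, 322

precision = tp / (tp + fp)  # 322 / 425 ≈ 0.76, matching the reported value
recall = tp / (tp + fn)     # 322 / 382 ≈ 0.84, matching the reported value
print(f"precision={precision:.2f}, recall={recall:.2f}")
```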
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("atsizelti/turkish_org_classifier_16k")
tokenizer = AutoTokenizer.from_pretrained("atsizelti/turkish_org_classifier_16k")
text = "Örnek metin buraya girilir."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
predictions = outputs.logits.argmax(-1)
```
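To turn the raw logits into an interpretable score, a small follow-up sketch (that label index 1 is the organization class is an assumption; check `model.config.id2label`):
```python
import torch

# Continues from the snippet above: convert logits to class probabilities.
with torch.no_grad():
    probs = torch.softmax(outputs.logits, dim=-1)

org_prob = probs[0, 1].item()  # assumes index 1 is the organization class
print(f"P(organization) = {org_prob:.3f}")
```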
|
atsizelti/turkish_org_classifier
|
atsizelti
| 2024-06-23T19:46:12Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"turkish",
"tr",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-04-23T23:00:16Z |
---
language: "tr"
tags:
- "bert"
- "turkish"
- "text-classification"
license: "apache-2.0"
datasets:
- "custom"
metrics:
- "precision"
- "recall"
- "f1"
- "accuracy"
---
# BERT-based Organization Detection Model for Turkish Texts
## Model Description
This model is fine-tuned on the `dbmdz/bert-base-turkish-uncased` architecture for detecting organization accounts within Turkish Twitter. This initiative is part of the Politus Project's efforts to analyze organizational presence in social media data.
## Model Architecture
- **Base Model:** BERT (dbmdz/bert-base-turkish-uncased)
- **Training Data:** Twitter data from 4,000 random accounts and 4,000 accounts with high organization-related activity (m3inference organization scores above 0.7), 8,000 accounts in total. The data was annotated based on user names, screen names, and descriptions using ChatGPT-4.
## Training Setup
- **Tokenization:** Used Hugging Face's AutoTokenizer, padding sequences to a maximum length of 128 tokens.
- **Dataset Split:** 80% training, 20% validation.
- **Training Parameters:**
- Epochs: 3
- Training batch size: 8
- Evaluation batch size: 16
- Warmup steps: 500
- Weight decay: 0.01
## Hyperparameter Tuning
Performed using Optuna; the best settings found were:
- **Learning rate:** 1.84e-05
- **Batch size:** 16
- **Epochs:** 3
## Evaluation Metrics
- **Precision on Validation Set:** 0.67 (organization class)
- **Recall on Validation Set:** 0.81 (organization class)
- **F1-Score (Macro Average):** 0.73
- **Accuracy:** 0.94
- **Confusion Matrix on Validation Set:**
```
[[1390, 60],
[ 28, 122]]
```
- **Hand-coded Sample of 1000 Accounts:**
- **Precision:** 0.89
- **Recall:** 0.89
- **F1-Score (Macro Average):** 0.89
- **Confusion Matrix:**
```
[[935, 4],
[ 4, 31]]
```
## How to Use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("atsizelti/turkish_org_classifier")
tokenizer = AutoTokenizer.from_pretrained("atsizelti/turkish_org_classifier")
text = "Örnek metin buraya girilir."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
predictions = outputs.logits.argmax(-1)
```
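For scoring many accounts at once, a batched variant of the snippet above — the example texts are illustrative, and truncation mirrors the 128-token setup described in the training details:
```python
import torch

texts = [
    "Resmi kurumsal duyuru hesabı.",          # likely an organization account
    "Kedileri seven sıradan bir kullanıcı.",  # likely an individual
]

batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits

predictions = logits.argmax(-1).tolist()
print(predictions)
```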
|
psneto/phi-2-finetuned-gsm8k-all-tokens
|
psneto
| 2024-06-23T19:40:16Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-11T00:51:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
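Pending author-provided instructions, a minimal sketch using the standard 🤗 Transformers text-generation API (the repo id comes from this card's metadata; the GSM8K-style prompt and generation settings are illustrative assumptions):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "psneto/phi-2-finetuned-gsm8k-all-tokens"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative GSM8K-style math word problem.
prompt = "Question: A farmer has 12 cows and buys 7 more. How many cows does he have? Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```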
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tsavage68/Summary_L3_150steps_1e7rate_05beta_CSFTDPO
|
tsavage68
| 2024-06-23T19:39:50Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"base_model:finetune:tsavage68/Summary_L3_1000steps_1e7rate_SFT2",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-06-23T19:31:07Z |
---
license: llama3
base_model: tsavage68/Summary_L3_1000steps_1e7rate_SFT2
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Summary_L3_150steps_1e7rate_05beta_CSFTDPO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Summary_L3_150steps_1e7rate_05beta_CSFTDPO
This model is a fine-tuned version of [tsavage68/Summary_L3_1000steps_1e7rate_SFT2](https://huggingface.co/tsavage68/Summary_L3_1000steps_1e7rate_SFT2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6098
- Rewards/chosen: 0.0340
- Rewards/rejected: -0.3093
- Rewards/accuracies: 0.1400
- Rewards/margins: 0.3433
- Logps/rejected: -15.8824
- Logps/chosen: -9.3149
- Logits/rejected: -1.0943
- Logits/chosen: -1.0959
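For readers interpreting the reward columns: these follow the standard DPO convention (Rafailov et al., 2023), in which rewards are β-scaled log-probability ratios of the trained policy against the frozen SFT reference — presumably with β = 0.5 here, given the "05beta" in the model name. This is a reading of the metric names, not something stated elsewhere on this card:

$$
r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad
\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}\left[\log \sigma\big(r(x, y_w) - r(x, y_l)\big)\right]
$$

Consistent with this, Rewards/margins is exactly Rewards/chosen − Rewards/rejected (0.0340 − (−0.3093) = 0.3433 at the final checkpoint).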
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 150
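As a rough sketch of how these hyperparameters might map onto a TRL `DPOTrainer` run — the card includes no training code, so the dataset path, loading, and the 0.8-era TRL signature below are all assumptions:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "tsavage68/Summary_L3_1000steps_1e7rate_SFT2"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

args = TrainingArguments(
    output_dir="Summary_L3_150steps_1e7rate_05beta_CSFTDPO",
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=150,
    seed=42,
)

# Placeholder: DPO expects a dataset with prompt/chosen/rejected columns.
preference_dataset = load_dataset("json", data_files="preferences.json")["train"]

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.5,  # reading "05beta" in the model name as β = 0.5
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```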
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6835 | 0.2004 | 50 | 0.6724 | 0.0066 | -0.0411 | 0.1350 | 0.0477 | -15.3460 | -9.3696 | -1.0959 | -1.0974 |
| 0.6728 | 0.4008 | 100 | 0.6273 | 0.0168 | -0.1873 | 0.1400 | 0.2041 | -15.6383 | -9.3492 | -1.0942 | -1.0958 |
| 0.6326 | 0.6012 | 150 | 0.6098 | 0.0340 | -0.3093 | 0.1400 | 0.3433 | -15.8824 | -9.3149 | -1.0943 | -1.0959 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.20.0
- Tokenizers 0.19.1
|