| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| amusktweewt/detect_car4-yolos | amusktweewt | 2024-01-25T16:54:05Z | 1 | 0 | transformers | ["transformers", "yolos", "object-detection", "license:mit", "endpoints_compatible", "region:us"] | object-detection | 2024-01-23T19:13:25Z |
---
license: mit
pipeline_tag: object-detection
---
|
| LazarusNLP/all-indobert-base-p2 | LazarusNLP | 2024-01-25T16:52:54Z | 50 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "ind", "dataset:indonli", "dataset:indolem/indo_story_cloze", "dataset:unicamp-dl/mmarco", "dataset:miracl/miracl", "dataset:SEACrowd/wrete", "dataset:SEACrowd/indolem_ntp", "dataset:khalidalt/tydiqa-goldp", "dataset:SEACrowd/facqa", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-01-25T16:36:04Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- indonli
- indolem/indo_story_cloze
- unicamp-dl/mmarco
- miracl/miracl
- SEACrowd/wrete
- SEACrowd/indolem_ntp
- khalidalt/tydiqa-goldp
- SEACrowd/facqa
language:
- ind
---
# LazarusNLP/all-indobert-base-p2
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LazarusNLP/all-indobert-base-p2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/all-indobert-base-p2')
model = AutoModel.from_pretrained('LazarusNLP/all-indobert-base-p2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/all-indobert-base-p2)
## Training
The model was trained with the parameters:
**DataLoader**:
`MultiDatasetDataLoader.MultiDatasetDataLoader` of length 352 with parameters:
```
{'batch_size_pairs': 384, 'batch_size_triplets': 256}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 36,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
| yasithheshan/llama2-13b-qlora-finetunined-4-bit-grade-dataset-v1 | yasithheshan | 2024-01-25T16:48:44Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2024-01-25T16:48:34Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
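For reference, the settings above map onto `transformers`' `BitsAndBytesConfig` roughly as follows. This is a sketch only: the card does not name the base model, so the base-model id below is an assumption inferred from the repo name.
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Assumed base model (not stated in the card); replace with the actual one.
base_model_id = "meta-llama/Llama-2-13b-hf"
adapter_id = "yasithheshan/llama2-13b-qlora-finetunined-4-bit-grade-dataset-v1"

# Mirror the quantization settings listed above (NF4, fp16 compute, no double quant).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the QLoRA adapter
```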
### Framework versions
- PEFT 0.4.0
|
| asun17904/imdb-bert-base-uncased-a2b2 | asun17904 | 2024-01-25T16:43:57Z | 1 | 0 | pytorch | ["pytorch", "bert", "en", "license:mit", "region:us"] | null | 2024-01-25T04:27:32Z |
---
language: en
license: mit
library_name: pytorch
---
# Knowledge Continuity Regularized Network
Trainer Hyperparameters:
- `lr` = 5e-05
- `per_device_batch_size` = 16
- `gradient_accumulation_steps` = 1
- `weight_decay` = 1e-09
- `seed` = 42
Regularization Hyperparameters:
- `numerical stability denominator constant` = 0.01
- `lambda` = 0.01
- `alpha` = 2.0
- `beta` = 2.0
Extended Logs:
|eval_loss|eval_accuracy|epoch|
|--|--|--|
|7.262|0.860|1.0|
|6.747|0.892|2.0|
|7.280|0.859|3.0|
|7.021|0.875|4.0|
|7.106|0.868|5.0|
|7.054|0.873|6.0|
|7.037|0.874|7.0|
|7.054|0.872|8.0|
|7.084|0.871|9.0|
|7.320|0.857|10.0|
|7.094|0.871|11.0|
|7.046|0.874|12.0|
|7.073|0.872|13.0|
|7.151|0.867|14.0|
|7.054|0.874|15.0|
|7.125|0.869|16.0|
|7.034|0.875|17.0|
|7.069|0.872|18.0|
|7.010|0.876|19.0|
|7.048|0.874|20.0|
|6.984|0.878|21.0|
|7.022|0.875|22.0|
|7.020|0.874|23.0|
|7.099|0.871|24.0|
|7.102|0.870|25.0|
|7.797|0.828|26.0|
|7.066|0.873|27.0|
|7.046|0.874|28.0|
|7.104|0.870|29.0|
|7.118|0.870|30.0|
|7.270|0.860|31.0|
|7.103|0.870|32.0|
|7.087|0.872|33.0|
|7.103|0.871|34.0|
|7.941|0.817|35.0|
|7.066|0.873|36.0|
|7.140|0.869|37.0|
|7.055|0.873|38.0|
|7.151|0.868|39.0|
|7.231|0.863|40.0|
|7.158|0.868|41.0|
|7.073|0.873|42.0|
|7.087|0.871|43.0|
|7.108|0.871|44.0|
|7.053|0.873|45.0|
|7.057|0.873|46.0|
|7.075|0.873|47.0|
|7.069|0.873|48.0|
|7.097|0.871|49.0|
|
| LN1996/peft-qlora-run2 | LN1996 | 2024-01-25T16:37:52Z | 3 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:microsoft/phi-2", "base_model:adapter:microsoft/phi-2", "region:us"] | null | 2024-01-25T16:37:20Z |
---
library_name: peft
base_model: microsoft/phi-2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
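As a starting point, the adapter metadata above names `microsoft/phi-2` as the base model, so a minimal loading sketch might look like the following; everything beyond those two identifiers is an assumption.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "LN1996/peft-qlora-run2"

# Loads microsoft/phi-2 (per the adapter config) and attaches this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
```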
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
| AbhiPemmaraju/gpt_model | AbhiPemmaraju | 2024-01-25T16:36:48Z | 178 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-25T16:36:07Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: gpt_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8578
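The card does not include an inference example; assuming the repo contains a standard GPT-2-style checkpoint with its tokenizer, a minimal sketch with the `transformers` pipeline would be:
```python
from transformers import pipeline

# Sketch only: the prompt and generation settings are illustrative, not from the card.
generator = pipeline("text-generation", model="AbhiPemmaraju/gpt_model")
print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```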
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.63 | 100 | 3.9939 |
| No log | 1.26 | 200 | 3.8929 |
| No log | 1.89 | 300 | 3.8522 |
| No log | 2.52 | 400 | 3.8505 |
| 4.4375 | 3.14 | 500 | 3.8600 |
| 4.4375 | 3.77 | 600 | 3.8487 |
| 4.4375 | 4.4 | 700 | 3.8578 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
| Alex-dc/q-Taxi-v3 | Alex-dc | 2024-01-25T16:33:55Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-25T16:33:53Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="Alex-dc/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
| Hector001/emotion-vit-model-hector | Hector001 | 2024-01-25T16:30:07Z | 177 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "dataset:imagefolder", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-01-25T16:06:39Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: emotion-vit-model-hector
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-vit-model-hector
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
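No inference example is provided; assuming the repo ships the fine-tuned ViT weights together with its image processor, a minimal sketch would be:
```python
from transformers import pipeline

# Sketch only: "face.jpg" is a hypothetical input path, not a file from this repo.
classifier = pipeline("image-classification", model="Hector001/emotion-vit-model-hector")
print(classifier("face.jpg"))
```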
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.13.3
|
| Fynd/test_llamav2_13b_2_ep_intent | Fynd | 2024-01-25T16:25:59Z | 2 | 0 | peft | ["peft", "safetensors", "region:us"] | null | 2024-01-25T11:17:31Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
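For reference, the settings above correspond roughly to this `BitsAndBytesConfig` (a sketch only; the base model for this adapter is not stated in the card):
```python
from transformers import BitsAndBytesConfig

# 8-bit loading, mirroring the values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
```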
### Framework versions
- PEFT 0.5.0
|
| nocab7714/BRA7 | nocab7714 | 2024-01-25T16:12:47Z | 0 | 0 | null | ["license:other", "region:us"] | null | 2024-01-25T16:05:16Z |
---
license: other
license_name: other
license_link: LICENSE
---
|
| NBA55/llama2-7b-qlora-finetunined-4-bit-grade-dataset-v1 | NBA55 | 2024-01-25T16:04:22Z | 1 | 0 | peft | ["peft", "region:us"] | null | 2024-01-25T16:04:13Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
### Framework versions
- PEFT 0.4.0
|
| Vakatt/dqn-SpaceInvadersNoFrameskip-v4 | Vakatt | 2024-01-25T15:58:16Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-01-25T15:57:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 559.50 +/- 97.38
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vakatt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Vakatt -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Vakatt
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
| SharonTudi/DIALOGUE_two | SharonTudi | 2024-01-25T15:55:30Z | 94 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-cased", "base_model:finetune:distilbert/distilbert-base-cased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2024-01-25T15:51:32Z |
---
license: apache-2.0
base_model: distilbert-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: DIALOGUE_two
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DIALOGUE_two
This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3804
- Precision: 0.9762
- Recall: 0.9737
- F1: 0.9736
- Accuracy: 0.9737
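The card does not include a usage snippet; assuming the checkpoint and tokenizer are both in the repo, a minimal sketch would be:
```python
from transformers import pipeline

# Sketch only: the example utterance is illustrative, not taken from the (undisclosed) dataset.
classifier = pipeline("text-classification", model="SharonTudi/DIALOGUE_two")
print(classifier("Could you book me a table for two at 7pm?"))
```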
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.3711 | 0.31 | 15 | 1.3213 | 0.5848 | 0.5132 | 0.4588 | 0.5132 |
| 1.2829 | 0.62 | 30 | 1.1830 | 0.7679 | 0.7237 | 0.7027 | 0.7237 |
| 1.1039 | 0.94 | 45 | 0.9695 | 0.8939 | 0.8158 | 0.8094 | 0.8158 |
| 0.9122 | 1.25 | 60 | 0.7720 | 0.9499 | 0.9474 | 0.9473 | 0.9474 |
| 0.7581 | 1.56 | 75 | 0.6220 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.6483 | 1.88 | 90 | 0.5096 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.5277 | 2.19 | 105 | 0.4330 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.4708 | 2.5 | 120 | 0.4003 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
| 0.4622 | 2.81 | 135 | 0.3804 | 0.9762 | 0.9737 | 0.9736 | 0.9737 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
| retdop/Taxi-v3 | retdop | 2024-01-25T15:44:29Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2024-01-25T15:44:26Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="retdop/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
| noamrot/FuseCap_Image_Captioning | noamrot | 2024-01-25T15:42:28Z | 2,847 | 21 | transformers | ["transformers", "pytorch", "blip", "image-text-to-text", "image-captioning", "image-to-text", "arxiv:2305.17718", "license:mit", "region:us"] | image-to-text | 2023-05-31T07:04:57Z |
---
license: mit
inference: false
pipeline_tag: image-to-text
tags:
- image-captioning
---
# FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions
A framework designed to generate semantically rich image captions.
## Resources
- 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/).
- 📝 **Read the Paper**: You can find the paper [here](https://arxiv.org/abs/2305.17718).
- 🚀 **Demo**: Try out our BLIP-based model [demo](https://huggingface.co/spaces/noamrot/FuseCap) trained using FuseCap.
- 📂 **Code Repository**: The code for FuseCap can be found in the [GitHub repository](https://github.com/RotsteinNoam/FuseCap).
- 🗃️ **Datasets**: The fused captions datasets can be accessed from [here](https://github.com/RotsteinNoam/FuseCap#datasets).
#### Running the model
Our BLIP-based model can be run using the following code:
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)
img_url = 'https://huggingface.co/spaces/noamrot/FuseCap/resolve/main/bike.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
text = "a picture of "
inputs = processor(raw_image, text, return_tensors="pt").to(device)
out = model.generate(**inputs, num_beams = 3)
print(processor.decode(out[0], skip_special_tokens=True))
```
## Upcoming Updates
The official codebase, datasets and trained models for this project will be released soon.
## BibTeX
```bibtex
@inproceedings{rotstein2024fusecap,
title={Fusecap: Leveraging large language models for enriched fused image captions},
author={Rotstein, Noam and Bensa{\"\i}d, David and Brody, Shaked and Ganz, Roy and Kimmel, Ron},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={5689--5700},
year={2024}
}
```
|
| daxiongshu/Pluto_24B_DPO_63 | daxiongshu | 2024-01-25T15:32:37Z | 49 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-25T03:32:36Z |
---
license: apache-2.0
---
# Model Card for Pluto_24B_DPO_63
<!-- Provide a quick summary of what the model is/does. -->
This model fine-tunes `cloudyu/Mixtral_7Bx2_MoE` on the `jondurbin/truthy-dpo-v0.1` dataset for 1 epoch.
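The card stops short of a usage example; a minimal text-generation sketch (assumptions noted in the comments) might look like:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: a ~24B Mixtral-style MoE checkpoint generally needs multiple GPUs or
# quantization; device_map="auto" and the prompt are assumptions, not from the card.
model_id = "daxiongshu/Pluto_24B_DPO_63"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```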
## Model Details
### Model Description
|
| TieIncred/xlm-roberta-base-finetuned-panx-de | TieIncred | 2024-01-25T15:25:03Z | 91 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-23T18:04:37Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8655529037390612
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8656
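A usage snippet is not included; assuming the fine-tuned NER head and label mapping are part of the checkpoint, a minimal sketch would be:
```python
from transformers import pipeline

# Sketch only: the German example sentence is illustrative.
ner = pipeline(
    "token-classification",
    model="TieIncred/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```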
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2642 | 1.0 | 525 | 0.1590 | 0.8215 |
| 0.1275 | 2.0 | 1050 | 0.1347 | 0.8517 |
| 0.0824 | 3.0 | 1575 | 0.1365 | 0.8656 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.15.0
|
| itsdhanoob/cart_pole_policy_gradient | itsdhanoob | 2024-01-25T15:22:46Z | 0 | 0 | null | ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us"] | reinforcement-learning | 2024-01-25T15:22:37Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cart_pole_policy_gradient
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
| rishabh0000/mistral_7b_rishabh | rishabh0000 | 2024-01-25T15:19:55Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2024-01-25T15:19:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
| Qwen/Qwen-VL-Chat | Qwen | 2024-01-25T15:18:14Z | 30,208 | 350 | transformers | ["transformers", "pytorch", "qwen", "text-generation", "custom_code", "zh", "en", "arxiv:2308.12966", "autotrain_compatible", "region:us"] | text-generation | 2023-08-20T04:45:22Z |
---
language:
- zh
- en
tags:
- qwen
pipeline_tag: text-generation
inference: false
---
# Qwen-VL-Chat
<br>
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/logo_vl.jpg" width="400"/>
<p>
<br>
<p align="center">
Qwen-VL
<a href="https://huggingface.co/Qwen/Qwen-VL">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL/summary">🤖</a>  |
Qwen-VL-Chat
<a href="https://huggingface.co/Qwen/Qwen-VL-Chat">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat/summary">🤖</a> 
(Int4:
<a href="https://huggingface.co/Qwen/Qwen-VL-Chat-Int4">🤗</a>
<a href="https://modelscope.cn/models/qwen/Qwen-VL-Chat-Int4/summary">🤖</a> ) |
Qwen-VL-Plus
<a href="https://huggingface.co/spaces/Qwen/Qwen-VL-Plus">🤗</a>
<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Chat-Demo/summary">🤖</a>  |
Qwen-VL-Max
<a href="https://huggingface.co/spaces/Qwen/Qwen-VL-Max">🤗</a>
<a href="https://modelscope.cn/studios/qwen/Qwen-VL-Max/summary">🤖</a> 
<br>
<a href="https://tongyi.aliyun.com/qianwen">Web</a>   |   
<a href="https://help.aliyun.com/zh/dashscope/developer-reference/vl-plus-quick-start">API</a>   |   
<a href="assets/wechat.png">WeChat</a>   |   
<a href="https://discord.gg/z3GAxXZ9Ce">Discord</a>   |   
<a href="https://arxiv.org/abs/2308.12966">Paper</a>   |   
<a href="TUTORIAL.md">Tutorial</a>
</p>
<br>
**Qwen-VL** 是阿里云研发的大规模视觉语言模型(Large Vision Language Model, LVLM)。Qwen-VL 可以以图像、文本、检测框作为输入,并以文本和检测框作为输出。Qwen-VL 系列模型性能强大,具备多语言对话、多图交错对话等能力,并支持中文开放域定位和细粒度图像识别与理解。
**Qwen-VL** (Qwen Large Vision Language Model) is the visual multimodal version of the large model series Qwen (abbr. Tongyi Qianwen) proposed by Alibaba Cloud. Qwen-VL accepts images, text, and bounding boxes as inputs and outputs text and bounding boxes. Its features include multilingual dialogue, interleaved multi-image conversations, open-domain grounding in Chinese, and fine-grained image recognition and understanding.
目前,我们提供了Qwen-VL和Qwen-VL-Chat两个模型,分别为预训练模型和Chat模型。如果想了解更多关于模型的信息,请点击[链接](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md)查看我们的技术备忘录。本仓库为Qwen-VL-Chat仓库。
We release Qwen-VL and Qwen-VL-Chat, which are pretrained model and Chat model respectively. For more details about Qwen-VL, please refer to our [technical memo](https://github.com/QwenLM/Qwen-VL/blob/master/visual_memo.md). This repo is the one for Qwen-VL-Chat.
<br>
## 安装要求 (Requirements)
* python 3.8及以上版本
* pytorch 1.12及以上版本,推荐2.0及以上版本
* 建议使用CUDA 11.4及以上(GPU用户需考虑此选项)
* python 3.8 and above
* pytorch 1.12 and above, 2.0 and above are recommended
* CUDA 11.4 and above are recommended (this is for GPU users)
<br>
## 快速开始 (Quickstart)
我们提供简单的示例来说明如何利用 🤗 Transformers 快速使用Qwen-VL-Chat。
在开始前,请确保你已经配置好环境并安装好相关的代码包。最重要的是,确保你满足上述要求,然后安装相关的依赖库。
Below, we provide simple examples to show how to use Qwen-VL-Chat with 🤗 Transformers.
Before running the code, make sure you have set up the environment and installed the required packages. Make sure you meet the above requirements, and then install the dependent libraries.
```bash
pip install -r requirements.txt
```
接下来你可以开始使用Transformers来使用我们的模型。关于视觉模块的更多用法,请参考[教程](TUTORIAL.md)。
Now you can start with Transformers. For more usage of the vision encoder, please refer to the [tutorial](TUTORIAL_zh.md).
#### 🤗 Transformers
To use Qwen-VL-Chat for inference, all you need to do is input a few lines of code as demonstrated below. However, **please make sure that you are using the latest code.**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig
import torch
torch.manual_seed(1234)
# Note: The default behavior now has injection attack prevention off.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
# use bf16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cpu", trust_remote_code=True).eval()
# use cuda device
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-VL-Chat", device_map="cuda", trust_remote_code=True).eval()
# Specify hyperparameters for generation (No need to do this if you are using transformers>=4.32.0)
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-VL-Chat", trust_remote_code=True)
# 1st dialogue turn
query = tokenizer.from_list_format([
{'image': 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'},
{'text': '这是什么'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# 图中是一名年轻女子在沙滩上和她的狗玩耍,狗的品种可能是拉布拉多。她们坐在沙滩上,狗的前腿抬起来,似乎在和人类击掌。两人之间充满了信任和爱。
# 2nd dialogue turn
response, history = model.chat(tokenizer, '输出"击掌"的检测框', history=history)
print(response)
# <ref>击掌</ref><box>(517,508),(589,611)</box>
image = tokenizer.draw_bbox_on_latest_picture(response, history)
if image:
    image.save('1.jpg')
else:
    print("no box")
```
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo_highfive.jpg" width="500"/>
<p>
<br>
## 量化 (Quantization)
### 用法 (Usage)
当前我们提供了基于[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)的量化方案,并提供了Qwen-VL-Chat的Int4量化版本Qwen-VL-Chat-Int4 [点击此处](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4)。该模型在效果评测上几乎无损,并在显存占用和推理速度上具有明显优势。
下文说明如何使用该量化模型。开始之前,请确保你满足要求(如torch2.0及以上、transformers 4.32.0及以上,等)并安装所需的代码库:
We provide a new solution based on [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), and release an Int4 quantized model for Qwen-VL-Chat, Qwen-VL-Chat-Int4 [Click here](https://huggingface.co/Qwen/Qwen-VL-Chat-Int4), which achieves nearly lossless model quality while reducing memory cost and improving inference speed.
Here we demonstrate how to use our provided quantized models for inference. Before you start, make sure you meet the requirements (e.g., torch 2.0 and above, transformers 4.32.0 and above, etc.) and install the required packages:
```bash
pip install optimum
git clone https://github.com/JustinLin610/AutoGPTQ.git && cd AutoGPTQ
pip install -v .
```
如遇到安装 `auto-gptq` 的问题,建议您前往官方[repo](https://github.com/PanQiWei/AutoGPTQ) 寻找合适的wheel。
随后你便可以按照上述用法,轻松调用量化模型:
If you meet problems installing `auto-gptq`, we advise you to check out the official [repo](https://github.com/PanQiWei/AutoGPTQ) to find a wheel.
Then you can load the quantized model easily and run inference as usual:
```python
model = AutoModelForCausalLM.from_pretrained(
"Qwen/Qwen-VL-Chat-Int4",
device_map="auto",
trust_remote_code=True
).eval()
# Either a local path or a URL between <img></img> tags.
image_path = 'https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg'
response, history = model.chat(tokenizer, query=f'<img>{image_path}</img>这是什么', history=None)
print(response)
```
### 效果评测 (Performance)
我们列出不同精度下模型在评测基准 **[TouchStone](https://github.com/OFA-Sys/TouchStone)** 上的表现,并发现量化模型并没有显著性能损失。结果如下所示:
We illustrate the model performance of both BF16 and Int4 models on the benchmark **[TouchStone](https://github.com/OFA-Sys/TouchStone)**, and we find that the quantized model does not suffer from significant performance degradation. Results are shown below:
| Quantization | ZH. | EN |
| ------------ | :--------: | :-----------: |
| BF16 | 401.2 | 645.2 |
| Int4 | 386.6 | 651.4 |
### 推理速度 (Inference Speed)
我们测算了在输入一张图片(即258个token)的条件下BF16和Int4的模型生成1792 (2048-258) 和 7934 (8192-258) 个token的平均速度。
We measured the average inference speed (tokens/s) of generating 1792 (2048-258) and 7934 (8192-258) tokens with the context of an image (which takes 258 tokens) under BF16 precision and Int4 quantization, respectively.
| Quantization | Speed (2048 tokens) | Speed (8192 tokens) |
| ------------ | :-----------------: | :-----------------: |
| BF16 | 28.87 | 24.32 |
| Int4 | 37.79 | 34.34 |
推理速度测算是在单卡 A100-SXM4-80G GPU上运行,使用PyTorch 2.0.1及CUDA 11.4。
The profiling runs on a single A100-SXM4-80G GPU with PyTorch 2.0.1 and CUDA 11.4.
### GPU显存占用 (GPU Memory Usage)
我们还测算了在一张图片输入的条件下BF16和Int4模型生成1792 (2048-258) 和 7934 (8192-258) 个token所需显存。结果如下所示:
We also profile the peak GPU memory usage for encoding 1792 (2048-258) tokens (including an image) as context (and generating single token) and generating 7934 (8192-258) tokens (with an image as context) under BF16 or Int4 quantization level, respectively. The results are shown below.
| Quantization | Peak Usage for Encoding 2048 Tokens | Peak Usage for Generating 8192 Tokens |
| ------------ | :---------------------------------: | :-----------------------------------: |
| BF16 | 22.60GB | 28.01GB |
| Int4 | 11.82GB | 17.23GB |
上述速度和显存测算使用[此脚本](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py)完成。
The above speed and memory profiling are conducted using [this script](https://qianwen-res.oss-cn-beijing.aliyuncs.com/profile_mm.py).
<br>
## 评测 (Evaluation)
我们从两个角度评测了两个模型的能力:
1. 在**英文标准 Benchmark** 上评测模型的基础任务能力。目前评测了四大类多模态任务:
- Zero-shot Caption: 评测模型在未见过数据集上的零样本图片描述能力;
- General VQA: 评测模型的通用问答能力,例如判断题、颜色、个数、类目等问答能力;
- Text-based VQA:评测模型对于图片中文字相关的识别/问答能力,例如文档问答、图表问答、文字问答等;
- Referring Expression Compression:评测模型给定物体描述画检测框的能力;
2. **试金石 (TouchStone)**:为了评测模型整体的图文对话能力和人类对齐水平。我们为此构建了一个基于 GPT4 打分来评测 LVLM 模型的 Benchmark:TouchStone。在 TouchStone-v0.1 中:
- 评测基准总计涵盖 300+张图片、800+道题目、27个类别。包括基础属性问答、人物地标问答、影视作品问答、视觉推理、反事实推理、诗歌创作、故事写作,商品比较、图片解题等**尽可能广泛的类别**。
- 为了弥补目前 GPT4 无法直接读取图片的缺陷,我们给所有的带评测图片提供了**人工标注的充分详细描述**,并且将图片的详细描述、问题和模型的输出结果一起交给 GPT4 打分。
- 评测同时包含英文版本和中文版本。
评测结果如下:
We evaluated the model's ability from two perspectives:
1. **Standard Benchmarks**: We evaluate the model's basic task capabilities on four major categories of multimodal tasks:
- Zero-shot Caption: Evaluate model's zero-shot image captioning ability on unseen datasets;
- General VQA: Evaluate the general question-answering ability of pictures, such as the judgment, color, number, category, etc;
- Text-based VQA: Evaluate the model's ability to recognize text in pictures, such as document QA, chart QA, etc;
- Referring Expression Comprehension: Evaluate the ability to localize a target object in an image described by a referring expression.
2. **TouchStone**: To evaluate the overall text-image dialogue capability and alignment level with humans, we have constructed a benchmark called TouchStone, which is based on scoring with GPT4 to evaluate the LVLM model.
- The TouchStone benchmark covers a total of 300+ images, 800+ questions, and 27 categories. Such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc;
- In order to break the current limitation of GPT4 in terms of direct image input, TouchStone provides fine-grained image annotations by human labeling. These detailed annotations, along with the questions and the model's output, are then presented to GPT4 for scoring.
- The benchmark includes both English and Chinese versions.
The results of the evaluation are as follows:
Qwen-VL outperforms current SOTA generalist models on multiple VL tasks and has a more comprehensive coverage in terms of capability range.
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/radar.png" width="600"/>
<p>
### 零样本图像描述 & 通用视觉问答 (Zero-shot Captioning & General VQA)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="2">Zero-shot Captioning</th>
<th colspan="5">General VQA</th>
</tr>
<tr>
<th>NoCaps</th>
<th>Flickr30K</th>
<th>VQAv2<sup>dev</sup></th>
<th>OK-VQA</th>
<th>GQA</th>
<th>SciQA-Img<br>(0-shot)</th>
<th>VizWiz<br>(0-shot)</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="10">Generalist<br>Models</td>
<td>Flamingo-9B</td>
<td>-</td>
<td>61.5</td>
<td>51.8</td>
<td>44.7</td>
<td>-</td>
<td>-</td>
<td>28.8</td>
</tr>
<tr>
<td>Flamingo-80B</td>
<td>-</td>
<td>67.2</td>
<td>56.3</td>
<td>50.6</td>
<td>-</td>
<td>-</td>
<td>31.6</td>
</tr>
<tr>
<td>Unified-IO-XL</td>
<td>100.0</td>
<td>-</td>
<td>77.9</td>
<td>54.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Kosmos-1</td>
<td>-</td>
<td>67.1</td>
<td>51.0</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>29.2</td>
</tr>
<tr>
<td>Kosmos-2</td>
<td>-</td>
<td>66.7</td>
<td>45.6</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>BLIP-2 (Vicuna-13B)</td>
<td>103.9</td>
<td>71.6</td>
<td>65.0</td>
<td>45.9</td>
<td>32.3</td>
<td>61.0</td>
<td>19.6</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td><strong>121.9</strong></td>
<td>82.8</td>
<td>-</td>
<td>-</td>
<td>49.5</td>
<td>63.1</td>
<td>33.4</td>
</tr>
<tr>
<td>Shikra (Vicuna-13B)</td>
<td>-</td>
<td>73.9</td>
<td>77.36</td>
<td>47.16</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td><strong>Qwen-VL (Qwen-7B)</strong></td>
<td>121.4</td>
<td><b>85.8</b></td>
<td><b>78.8</b></td>
<td><b>58.6</b></td>
<td><b>59.3</b></td>
<td>67.1</td>
<td>35.2</td>
</tr>
<!-- <tr>
<td>Qwen-VL (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>63.6</td>
<td>-</td>
<td>-</td>
<td>39.1</td>
</tr> -->
<tr>
<td>Qwen-VL-Chat</td>
<td>120.2</td>
<td>81.0</td>
<td>78.2</td>
<td>56.6</td>
<td>57.5</td>
<td><b>68.2</b></td>
<td><b>38.9</b></td>
</tr>
<!-- <tr>
<td>Qwen-VL-Chat (4-shot)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>60.6</td>
<td>-</td>
<td>-</td>
<td>44.45</td>
</tr> -->
<tr>
<td>Previous SOTA<br>(Per Task Fine-tuning)</td>
<td>-</td>
<td>127.0<br>(PALI-17B)</td>
<td>84.5<br>(InstructBLIP<br>-FlanT5-XL)</td>
<td>86.1<br>(PALI-X<br>-55B)</td>
<td>66.1<br>(PALI-X<br>-55B)</td>
<td>72.1<br>(CFR)</td>
<td>92.53<br>(LLaVa+<br>GPT-4)</td>
<td>70.9<br>(PALI-X<br>-55B)</td>
</tr>
</tbody>
</table>
- 在 Zero-shot Caption 中,Qwen-VL 在 Flickr30K 数据集上取得了 **SOTA** 的结果,并在 Nocaps 数据集上取得了和 InstructBlip 可竞争的结果。
- 在 General VQA 中,Qwen-VL 取得了 LVLM 模型同等量级和设定下 **SOTA** 的结果。
- For zero-shot image captioning, Qwen-VL achieves the **SOTA** on Flickr30K and competitive results on Nocaps with InstructBlip.
- For general VQA, Qwen-VL achieves the **SOTA** under the same generalist LVLM scale settings.
### 文本导向的视觉问答 (Text-oriented VQA)
<table>
<thead>
<tr>
<th>Model type</th>
<th>Model</th>
<th>TextVQA</th>
<th>DocVQA</th>
<th>ChartQA</th>
<th>AI2D</th>
<th>OCR-VQA</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="5">Generalist Models</td>
<td>BLIP-2 (Vicuna-13B)</td>
<td>42.4</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>InstructBLIP (Vicuna-13B)</td>
<td>50.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>mPLUG-DocOwl (LLaMA-7B)</td>
<td>52.6</td>
<td>62.2</td>
<td>57.4</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Pic2Struct-Large (1.3B)</td>
<td>-</td>
<td><b>76.6</b></td>
<td>58.6</td>
<td>42.1</td>
<td>71.3</td>
</tr>
<tr>
<td>Qwen-VL (Qwen-7B)</td>
<td><b>63.8</b></td>
<td>65.1</td>
<td><b>65.7</b></td>
<td><b>62.3</b></td>
<td><b>75.7</b></td>
</tr>
<tr>
<td>Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>PALI-X-55B (Single-task FT)<br>(Without OCR Pipeline)</td>
<td>71.44</td>
<td>80.0</td>
<td>70.0</td>
<td>81.2</td>
<td>75.0</td>
</tr>
</tbody>
</table>
- 在文字相关的识别/问答评测上,取得了当前规模下通用 LVLM 达到的最好结果。
- 分辨率对上述某几个评测非常重要,大部分 224 分辨率的开源 LVLM 模型无法完成以上评测,或只能通过切图的方式解决。Qwen-VL 将分辨率提升到 448,可以直接以端到端的方式进行以上评测。Qwen-VL 在很多任务上甚至超过了 1024 分辨率的 Pic2Struct-Large 模型。
- In text-related recognition/QA evaluation, Qwen-VL achieves the SOTA under the generalist LVLM scale settings.
- Resolution is important for several above evaluations. While most open-source LVLM models with 224 resolution are incapable of these evaluations or can only solve these by cutting images, Qwen-VL scales the resolution to 448 so that it can be evaluated end-to-end. Qwen-VL even outperforms Pic2Struct-Large models of 1024 resolution on some tasks.
### 细粒度视觉定位 (Referring Expression Comprehension)
<table>
<thead>
<tr>
<th rowspan="2">Model type</th>
<th rowspan="2">Model</th>
<th colspan="3">RefCOCO</th>
<th colspan="3">RefCOCO+</th>
<th colspan="2">RefCOCOg</th>
<th>GRIT</th>
</tr>
<tr>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val</th>
<th>test-A</th>
<th>test-B</th>
<th>val-u</th>
<th>test-u</th>
<th>refexp</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td rowspan="8">Generalist Models</td>
<td>GPV-2</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>51.50</td>
</tr>
<tr>
<td>OFA-L*</td>
<td>79.96</td>
<td>83.67</td>
<td>76.39</td>
<td>68.29</td>
<td>76.00</td>
<td>61.75</td>
<td>67.57</td>
<td>67.58</td>
<td>61.70</td>
</tr>
<tr>
<td>Unified-IO</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td><b>78.61</b></td>
</tr>
<tr>
<td>VisionLLM-H</td>
<td></td>
<td>86.70</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>Shikra-7B</td>
<td>87.01</td>
<td>90.61</td>
<td>80.24 </td>
<td>81.60</td>
<td>87.36</td>
<td>72.12</td>
<td>82.27</td>
<td>82.19</td>
<td>69.34</td>
</tr>
<tr>
<td>Shikra-13B</td>
<td>87.83 </td>
<td>91.11</td>
<td>81.81</td>
<td>82.89</td>
<td>87.79</td>
<td>74.41</td>
<td>82.64</td>
<td>83.16</td>
<td>69.03</td>
</tr>
<tr>
<td>Qwen-VL-7B</td>
<td><b>89.36</b></td>
<td>92.26</td>
<td><b>85.34</b></td>
<td><b>83.12</b></td>
<td>88.25</td>
<td><b>77.21</b></td>
<td>85.58</td>
<td>85.48</td>
<td>78.22</td>
</tr>
<tr>
<td>Qwen-VL-7B-Chat</td>
<td>88.55</td>
<td><b>92.27</b></td>
<td>84.51</td>
<td>82.82</td>
<td><b>88.59</b></td>
<td>76.79</td>
<td><b>85.96</b></td>
<td><b>86.32</b></td>
<td>-</td>
</tr>
<tr>
<td rowspan="3">Specialist SOTAs<br>(Specialist/Finetuned)</td>
<td>G-DINO-L</td>
<td>90.56 </td>
<td>93.19</td>
<td>88.24</td>
<td>82.75</td>
<td>88.95</td>
<td>75.92</td>
<td>86.13</td>
<td>87.02</td>
<td>-</td>
</tr>
<tr>
<td>UNINEXT-H</td>
<td>92.64 </td>
<td>94.33</td>
<td>91.46</td>
<td>85.24</td>
<td>89.63</td>
<td>79.79</td>
<td>88.73</td>
<td>89.37</td>
<td>-</td>
</tr>
<tr>
<td>ONE-PEACE</td>
<td>92.58 </td>
<td>94.18</td>
<td>89.26</td>
<td>88.77</td>
<td>92.21</td>
<td>83.23</td>
<td>89.22</td>
<td>89.27</td>
<td>-</td>
</tr>
</tbody>
</table>
- 在定位任务上,Qwen-VL 全面超过 Shikra-13B,取得了目前 Generalist LVLM 模型上在 Refcoco 上的 **SOTA**。
- Qwen-VL 并没有在任何中文定位数据上训练过,但通过中文 Caption 数据和 英文 Grounding 数据的训练,可以 Zero-shot 泛化出中文 Grounding 能力。
我们提供了以上**所有**评测脚本以供复现我们的实验结果。请阅读 [eval/EVALUATION.md](eval/EVALUATION.md) 了解更多信息。
- Qwen-VL achieves the **SOTA** in all above referring expression comprehension benchmarks.
- Qwen-VL has not been trained on any Chinese grounding data, but it can still generalize to the Chinese Grounding tasks in a zero-shot way by training Chinese Caption data and English Grounding data.
We provide all of the above evaluation scripts for reproducing our experimental results. Please read [eval/EVALUATION.md](eval/EVALUATION.md) for more information.
### 闲聊能力测评 (Chat Evaluation)
TouchStone 是一个基于 GPT4 打分来评测 LVLM 模型的图文对话能力和人类对齐水平的基准。它涵盖了 300+张图片、800+道题目、27个类别,包括基础属性、人物地标、视觉推理、诗歌创作、故事写作、商品比较、图片解题等**尽可能广泛的类别**。关于 TouchStone 的详细介绍,请参考[touchstone/README_CN.md](touchstone/README_CN.md)了解更多信息。
TouchStone is a benchmark based on scoring with GPT4 to evaluate the abilities of the LVLM model on text-image dialogue and alignment levels with humans. It covers a total of 300+ images, 800+ questions, and 27 categories, such as attribute-based Q&A, celebrity recognition, writing poetry, summarizing multiple images, product comparison, math problem solving, etc. Please read [touchstone/README.md](touchstone/README.md) for more information.
#### 英语 (English)
| Model | Score |
|---------------|-------|
| PandaGPT | 488.5 |
| MiniGPT4 | 531.7 |
| InstructBLIP | 552.4 |
| LLaMA-AdapterV2 | 590.1 |
| mPLUG-Owl | 605.4 |
| LLaVA | 602.7 |
| Qwen-VL-Chat | 645.2 |
#### 中文 (Chinese)
| Model | Score |
|---------------|-------|
| VisualGLM | 247.1 |
| Qwen-VL-Chat | 401.2 |
Qwen-VL-Chat 模型在中英文的对齐评测中均取得当前 LVLM 模型下的最好结果。
Qwen-VL-Chat has achieved the best results in both Chinese and English alignment evaluation.
<br>
## 常见问题 (FAQ)
如遇到问题,敬请查阅 [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ_zh.md)以及issue区,如仍无法解决再提交issue。
If you run into problems, please refer to the [FAQ](https://github.com/QwenLM/Qwen-VL/blob/master/FAQ.md) and the existing issues to search for a solution before opening a new issue.
<br>
## 使用协议 (License Agreement)
研究人员与开发者可使用Qwen-VL和Qwen-VL-Chat或进行二次开发。我们同样允许商业使用,具体细节请查看[LICENSE](https://github.com/QwenLM/Qwen-VL/blob/master/LICENSE)。如需商用,请填写[问卷](https://dashscope.console.aliyun.com/openModelApply/qianwen)申请。
Researchers and developers are free to use the codes and model weights of both Qwen-VL and Qwen-VL-Chat. We also allow their commercial use. Check our license at [LICENSE](LICENSE) for more details.
<br>
## 引用 (Citation)
如果你觉得我们的论文和代码对你的研究有帮助,请考虑:star: 和引用 :pencil: :)
If you find our paper and code useful in your research, please consider giving a star :star: and citation :pencil: :)
```BibTeX
@article{Qwen-VL,
title={Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities},
author={Bai, Jinze and Bai, Shuai and Yang, Shusheng and Wang, Shijie and Tan, Sinan and Wang, Peng and Lin, Junyang and Zhou, Chang and Zhou, Jingren},
journal={arXiv preprint arXiv:2308.12966},
year={2023}
}
```
<br>
## 联系我们 (Contact Us)
如果你想给我们的研发团队和产品团队留言,请通过邮件([email protected])联系我们。
If you are interested to leave a message to either our research team or product team, feel free to send an email to [email protected].
|
Epiculous/Crunchy-onion-GGUF
|
Epiculous
| 2024-01-25T14:55:36Z | 56 | 1 | null |
[
"gguf",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-22T21:39:13Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format, provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The resulting 4-bit QLoRA was then merged into Mixtral Instruct, producing what you see here.
Works best with the Alpaca Instruct format.
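As a rough illustration (not taken from the original training setup), a typical Alpaca-style instruct prompt looks like the sketch below; the exact wrapping this model expects may differ slightly.
```python
# Hypothetical sketch of an Alpaca-style instruct prompt; adjust to taste.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)
prompt = ALPACA_TEMPLATE.format(instruction="Introduce yourself in character.")
print(prompt)
```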
|
srikarthikv/distilbert-base-uncased-lora-text-classification
|
srikarthikv
| 2024-01-25T14:54:15Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T14:54:13Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0072
- Accuracy: 0.88
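As a rough sketch of how this checkpoint might be used for inference, assuming it is a PEFT LoRA adapter on top of `distilbert-base-uncased` with two labels (adjust `num_labels` and the label mapping to your task):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# num_labels=2 is an assumption; the training data is not documented.
base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
model = PeftModel.from_pretrained(base, "srikarthikv/distilbert-base-uncased-lora-text-classification")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("This movie was surprisingly good!", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # label meanings depend on the (unknown) training data
```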
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.3560 | {'accuracy': 0.888} |
| 0.4316 | 2.0 | 500 | 0.5124 | {'accuracy': 0.878} |
| 0.4316 | 3.0 | 750 | 0.6530 | {'accuracy': 0.87} |
| 0.2331 | 4.0 | 1000 | 0.6871 | {'accuracy': 0.878} |
| 0.2331 | 5.0 | 1250 | 0.8012 | {'accuracy': 0.869} |
| 0.0918 | 6.0 | 1500 | 0.8738 | {'accuracy': 0.878} |
| 0.0918 | 7.0 | 1750 | 0.8714 | {'accuracy': 0.881} |
| 0.0349 | 8.0 | 2000 | 0.9631 | {'accuracy': 0.88} |
| 0.0349 | 9.0 | 2250 | 1.0067 | {'accuracy': 0.879} |
| 0.0071 | 10.0 | 2500 | 1.0072 | {'accuracy': 0.88} |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
srikarthikv/distilbert-base-uncasedfinetunedbert
|
srikarthikv
| 2024-01-25T14:54:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"region:us"
] | null | 2024-01-25T14:54:06Z |
---
library_name: peft
base_model: distilbert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
retdop/q-FrozenLake-v1-4x4-noSlippery
|
retdop
| 2024-01-25T14:53:17Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T14:53:15Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is the Hugging Face Deep RL course helper that downloads and unpickles the model dict.
model = load_from_hub(repo_id="retdop/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
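For a quick greedy rollout, something like the sketch below could work, assuming the pickled dictionary exposes the Q-table under a `"qtable"` key (as in the Hugging Face Deep RL course template); the key name and the exact `reset`/`step` return signatures depend on your Gym/Gymnasium version.
```python
import numpy as np

state = env.reset()  # newer Gymnasium versions return (obs, info) instead
done = False
total_reward = 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)     # classic gym 4-tuple API
    total_reward += reward
print("episode return:", total_reward)
```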
|
tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-128-output
|
tanatapanun
| 2024-01-25T14:51:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T13:32:14Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-50-epochs-1024-input-128-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-50-epochs-1024-input-128-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9109
- Rouge1: 0.1191
- Rouge2: 0.0252
- Rougel: 0.105
- Rougelsum: 0.1059
- Gen Len: 16.2
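A minimal inference sketch, assuming the usual seq2seq API, with inputs truncated to 1024 tokens and outputs capped at 128 tokens as the model name suggests:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "tanatapanun/fine-tuned-BioBART-50-epochs-1024-input-128-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Patient presents with ..."  # placeholder input text
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```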
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 8.7986 | 0.0 | 0.0 | 0.0 | 0.0 | 14.54 |
| No log | 2.0 | 302 | 4.6009 | 0.007 | 0.0022 | 0.0066 | 0.0067 | 4.73 |
| No log | 3.0 | 453 | 1.9851 | 0.1025 | 0.0246 | 0.091 | 0.0906 | 13.95 |
| 6.1578 | 4.0 | 604 | 1.7001 | 0.0763 | 0.0172 | 0.0666 | 0.0674 | 10.25 |
| 6.1578 | 5.0 | 755 | 1.6023 | 0.1303 | 0.0277 | 0.1167 | 0.1164 | 15.08 |
| 6.1578 | 6.0 | 906 | 1.5322 | 0.0795 | 0.0176 | 0.0732 | 0.0736 | 14.54 |
| 1.4113 | 7.0 | 1057 | 1.4998 | 0.0972 | 0.0241 | 0.0839 | 0.0838 | 13.47 |
| 1.4113 | 8.0 | 1208 | 1.4808 | 0.0992 | 0.0238 | 0.0894 | 0.0898 | 14.28 |
| 1.4113 | 9.0 | 1359 | 1.4964 | 0.1249 | 0.0214 | 0.111 | 0.1106 | 12.36 |
| 0.8834 | 10.0 | 1510 | 1.4858 | 0.1459 | 0.0363 | 0.1235 | 0.1237 | 15.63 |
| 0.8834 | 11.0 | 1661 | 1.4990 | 0.1578 | 0.0403 | 0.1379 | 0.139 | 15.92 |
| 0.8834 | 12.0 | 1812 | 1.5210 | 0.1327 | 0.0253 | 0.1212 | 0.1209 | 15.11 |
| 0.8834 | 13.0 | 1963 | 1.5381 | 0.1372 | 0.038 | 0.1255 | 0.1251 | 15.45 |
| 0.5229 | 14.0 | 2114 | 1.5559 | 0.1383 | 0.0348 | 0.1263 | 0.1263 | 16.49 |
| 0.5229 | 15.0 | 2265 | 1.5824 | 0.1509 | 0.0369 | 0.1336 | 0.1325 | 15.78 |
| 0.5229 | 16.0 | 2416 | 1.6369 | 0.128 | 0.0298 | 0.1176 | 0.1185 | 14.12 |
| 0.2708 | 17.0 | 2567 | 1.6393 | 0.1362 | 0.0429 | 0.1229 | 0.1229 | 15.77 |
| 0.2708 | 18.0 | 2718 | 1.6599 | 0.1521 | 0.0402 | 0.1329 | 0.1333 | 15.34 |
| 0.2708 | 19.0 | 2869 | 1.6705 | 0.1293 | 0.0265 | 0.1165 | 0.1166 | 16.51 |
| 0.1203 | 20.0 | 3020 | 1.6943 | 0.141 | 0.0289 | 0.1273 | 0.1275 | 15.69 |
| 0.1203 | 21.0 | 3171 | 1.6969 | 0.1253 | 0.0337 | 0.1081 | 0.1085 | 16.35 |
| 0.1203 | 22.0 | 3322 | 1.7431 | 0.1319 | 0.0272 | 0.1185 | 0.1185 | 15.63 |
| 0.1203 | 23.0 | 3473 | 1.7434 | 0.1357 | 0.0343 | 0.1253 | 0.125 | 16.39 |
| 0.0509 | 24.0 | 3624 | 1.7507 | 0.1375 | 0.0325 | 0.1233 | 0.1231 | 16.79 |
| 0.0509 | 25.0 | 3775 | 1.7776 | 0.1222 | 0.0328 | 0.1121 | 0.1121 | 16.18 |
| 0.0509 | 26.0 | 3926 | 1.7733 | 0.1265 | 0.0216 | 0.1166 | 0.117 | 16.25 |
| 0.0257 | 27.0 | 4077 | 1.8001 | 0.1238 | 0.0239 | 0.1116 | 0.1113 | 16.44 |
| 0.0257 | 28.0 | 4228 | 1.7955 | 0.1173 | 0.0221 | 0.103 | 0.1046 | 16.64 |
| 0.0257 | 29.0 | 4379 | 1.8143 | 0.1311 | 0.0273 | 0.1186 | 0.1183 | 16.78 |
| 0.0164 | 30.0 | 4530 | 1.8108 | 0.1331 | 0.0296 | 0.1219 | 0.1226 | 15.64 |
| 0.0164 | 31.0 | 4681 | 1.8184 | 0.1245 | 0.0339 | 0.1134 | 0.1143 | 16.55 |
| 0.0164 | 32.0 | 4832 | 1.8545 | 0.1101 | 0.0217 | 0.0982 | 0.0998 | 16.09 |
| 0.0164 | 33.0 | 4983 | 1.8550 | 0.1421 | 0.0322 | 0.1292 | 0.1296 | 16.07 |
| 0.0117 | 34.0 | 5134 | 1.8573 | 0.1309 | 0.0292 | 0.1192 | 0.1193 | 16.0 |
| 0.0117 | 35.0 | 5285 | 1.8453 | 0.1254 | 0.0238 | 0.1133 | 0.1139 | 16.55 |
| 0.0117 | 36.0 | 5436 | 1.8724 | 0.1167 | 0.0241 | 0.1024 | 0.1035 | 15.89 |
| 0.0089 | 37.0 | 5587 | 1.8761 | 0.1345 | 0.0275 | 0.1206 | 0.1208 | 15.87 |
| 0.0089 | 38.0 | 5738 | 1.8772 | 0.1338 | 0.0301 | 0.1216 | 0.1228 | 16.78 |
| 0.0089 | 39.0 | 5889 | 1.8654 | 0.134 | 0.0264 | 0.1193 | 0.1196 | 16.85 |
| 0.0071 | 40.0 | 6040 | 1.8812 | 0.129 | 0.0287 | 0.1181 | 0.1177 | 16.12 |
| 0.0071 | 41.0 | 6191 | 1.8838 | 0.1238 | 0.0274 | 0.1134 | 0.1134 | 16.29 |
| 0.0071 | 42.0 | 6342 | 1.8752 | 0.1334 | 0.0262 | 0.1209 | 0.1214 | 16.66 |
| 0.0071 | 43.0 | 6493 | 1.8993 | 0.1238 | 0.0254 | 0.1111 | 0.1113 | 16.31 |
| 0.0056 | 44.0 | 6644 | 1.8963 | 0.1279 | 0.0346 | 0.1133 | 0.1154 | 16.07 |
| 0.0056 | 45.0 | 6795 | 1.9079 | 0.1225 | 0.0261 | 0.108 | 0.1084 | 16.09 |
| 0.0056 | 46.0 | 6946 | 1.9132 | 0.129 | 0.025 | 0.1157 | 0.1154 | 16.26 |
| 0.0045 | 47.0 | 7097 | 1.9120 | 0.1419 | 0.0362 | 0.1275 | 0.1278 | 15.78 |
| 0.0045 | 48.0 | 7248 | 1.9069 | 0.1316 | 0.0253 | 0.1161 | 0.1165 | 16.38 |
| 0.0045 | 49.0 | 7399 | 1.9099 | 0.1206 | 0.0259 | 0.1074 | 0.1077 | 16.32 |
| 0.0041 | 50.0 | 7550 | 1.9109 | 0.1191 | 0.0252 | 0.105 | 0.1059 | 16.2 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
MoulikBansal/test_test_chal_ja_bhai
|
MoulikBansal
| 2024-01-25T14:35:30Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | 2024-01-24T18:21:18Z |
---
license: mit
library_name: peft
tags:
- generated_from_trainer
base_model: microsoft/phi-1_5
model-index:
- name: test_test_chal_ja_bhai
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_test_chal_ja_bhai
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
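A rough sketch of loading the adapter for generation, assuming it is a causal-LM LoRA adapter on `microsoft/phi-1_5` (older Transformers versions may additionally need `trust_remote_code=True`):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "MoulikBansal/test_test_chal_ja_bhai")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```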
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 1000
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Norod78/SD15-BambaBaby-LoRA
|
Norod78
| 2024-01-25T14:32:09Z | 4 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"character",
"baby",
"bamba",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:other",
"region:us"
] |
text-to-image
| 2024-01-25T14:32:07Z |
---
license: other
license_name: bespoke-lora-trained-license
license_link: https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=Rent&allowDerivatives=True&allowDifferentLicense=False
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- character
- baby
- bamba
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: BambaBaby
widget:
- text: 'A professional photo of BambaBaby Dancing in the streets of new-york '
output:
url: >-
5870247.jpeg
- text: 'A photo of a BambaBaby sitting at a pub with a glass of milk in front of him '
output:
url: >-
5870250.jpeg
- text: 'A professional shot of a BambaBaby surfing in the ocean '
output:
url: >-
5870243.jpeg
- text: 'A full body shot of a wooden BambaBaby Pinocchio puppet '
output:
url: >-
5870242.jpeg
- text: 'A cute album-cover photo of BambaBaby smoking a joint next to a cannabis plant '
output:
url: >-
5870245.jpeg
---
# BambaBaby
<Gallery />
([CivitAI](https://civitai.com/models/277078))
## Model description
An SD v1.5 LoRA of the Baby Bamba mascot, which advertises Bamba. Bamba is a snack made of peanut-butter-flavored puffed maize, manufactured by the Osem corporation. Use *BambaBaby* in your prompts to refer to this character.
## Trigger words
You should use `BambaBaby` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Norod78/bambababy/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Norod78/bambababy', weight_name='BambaBaby.safetensors')
image = pipeline('A cute album-cover photo of BambaBaby smoking a joint next to a cannabis plant ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Epiculous/Crunchy-onion
|
Epiculous
| 2024-01-25T14:31:39Z | 1,359 | 7 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"dataset:lemonilia/LimaRP",
"dataset:grimulkan/theory-of-mind",
"dataset:Epiculous/Gnosis",
"license:agpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-21T17:06:45Z |
---
license: agpl-3.0
datasets:
- lemonilia/LimaRP
- grimulkan/theory-of-mind
- Epiculous/Gnosis
---
# Crunchy-onion
This model was created by training the Mixtral base model on LimaRP (ShareGPT format, provided by SAO), theory of mind, and Gnosis (provided by jeiku).
The resulting 4-bit QLoRA was then merged into Mixtral Instruct, producing what you see here.
Works best with the Alpaca Instruct format.
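A minimal, hedged usage sketch with 🤗 Transformers is shown below; the Alpaca-style prompt is an assumption based on the note above, and a Mixtral-sized model needs substantial GPU memory (or quantization) to run.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epiculous/Crunchy-onion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires `accelerate`; consider 4-bit loading on smaller GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a short in-character greeting.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```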
|
VincentG1234/Model_2_GPT2_frenchdata
|
VincentG1234
| 2024-01-25T14:28:23Z | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T13:55:53Z |
---
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Model_2_GPT2_frenchdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Model_2_GPT2_frenchdata
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
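A minimal generation sketch, assuming the checkpoint is a standard GPT-2 causal LM fine-tuned on French text:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="VincentG1234/Model_2_GPT2_frenchdata")
print(generator("Il était une fois", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```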
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
sahild23d23e23re23r/sign-lan-model
|
sahild23d23e23re23r
| 2024-01-25T14:25:35Z | 174 | 0 |
transformers
|
[
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-25T13:57:23Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: sign-lan-model
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9993103448275862
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sign-lan-model
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0087
- Accuracy: 0.9993
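A minimal inference sketch with the image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sahild23d23e23re23r/sign-lan-model")
predictions = classifier("path/to/sign_image.jpg")  # placeholder path to your own image
print(predictions[:3])  # top predicted classes with scores
```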
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2769 | 1.0 | 611 | 0.0087 | 0.9993 |
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0
- Datasets 2.12.0
- Tokenizers 0.14.1
|
am-infoweb/rap_phase2_MODEL2_22jan_15i
|
am-infoweb
| 2024-01-25T14:21:25Z | 100 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-25T13:41:50Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: rap_phase2_MODEL2_22jan_15i
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rap_phase2_MODEL2_22jan_15i
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0245
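A minimal inference sketch with the question-answering pipeline; the question and context below are placeholders, since the actual training data is not documented:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="am-infoweb/rap_phase2_MODEL2_22jan_15i")
result = qa(
    question="Who issued the invoice?",  # placeholder question
    context="The invoice was issued by ACME Corp on 3 March 2023.",  # placeholder context
)
print(result)  # dict with 'answer', 'score', 'start', 'end'
```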
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5023 | 1.0 | 990 | 0.2146 |
| 0.1764 | 2.0 | 1980 | 0.0279 |
| 0.1017 | 3.0 | 2970 | 0.0540 |
| 0.1148 | 4.0 | 3960 | 0.0332 |
| 0.0769 | 5.0 | 4950 | 0.1151 |
| 0.0564 | 6.0 | 5940 | 0.0497 |
| 0.0606 | 7.0 | 6930 | 0.0357 |
| 0.0219 | 8.0 | 7920 | 0.0250 |
| 0.023 | 9.0 | 8910 | 0.0252 |
| 0.0223 | 10.0 | 9900 | 0.0245 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
MeghaMurali/videomae-base-foodpickup-customdata
|
MeghaMurali
| 2024-01-25T14:15:59Z | 47 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-01-25T09:15:58Z |
---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-foodpickup-customdata
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-foodpickup-customdata
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6326
- Accuracy: 0.52
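A minimal inference sketch with the video-classification pipeline (the clip path is a placeholder; the pipeline needs a video decoding backend such as `decord` installed):
```python
from transformers import pipeline

classifier = pipeline("video-classification", model="MeghaMurali/videomae-base-foodpickup-customdata")
print(classifier("path/to/clip.mp4"))  # placeholder video path; returns labels with scores
```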
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 28
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.29 | 8 | 0.6667 | 0.4348 |
| 0.6875 | 1.29 | 16 | 0.7287 | 0.3913 |
| 0.5114 | 2.29 | 24 | 0.7178 | 0.4348 |
| 0.5114 | 3.14 | 28 | 0.7075 | 0.4348 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ayeshgk/codet5-small-ft-v10
|
ayeshgk
| 2024-01-25T14:11:43Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Salesforce/codet5-small",
"base_model:finetune:Salesforce/codet5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T13:54:04Z |
---
license: apache-2.0
base_model: Salesforce/codet5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: codet5-small-ft-v10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codet5-small-ft-v10
This model is a fine-tuned version of [Salesforce/codet5-small](https://huggingface.co/Salesforce/codet5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6076
- Rouge1: 64.7797
- Rouge2: 49.8902
- Rougel: 62.433
- Rougelsum: 61.8109
- Gen Len: 12.9286
## Model description
More information needed
### FORMAT
[BUG] reader = ReaderFactory.newXmlReader( null) ; [CONTEXT] outputFile = output.getFile(); } Reader reader; try { if (inputEncoding != null) { if (parser.getType() == Parser.XML_TYPE) { <extra_id_0> } else { reader = ReaderFactory.newReader(inputFile, inputEncoding); } } else { reader = ReaderFactory.newPlatformReader(inputFile); }
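A rough sketch of running the model on an input in the format above, assuming a standard T5-style text2text setup (the input string is abbreviated):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ayeshgk/codet5-small-ft-v10"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

src = "[BUG] reader = ReaderFactory.newXmlReader( null) ; [CONTEXT] ... <extra_id_0> ..."  # abbreviated example input
inputs = tokenizer(src, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```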
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 19 | 1.9088 | 54.5113 | 41.9981 | 53.7739 | 53.7474 | 12.3286 |
| No log | 2.0 | 38 | 1.1442 | 66.1136 | 56.2675 | 65.0547 | 64.8304 | 12.4857 |
| No log | 3.0 | 57 | 0.6821 | 67.6198 | 52.8142 | 64.9905 | 64.3787 | 13.3429 |
| No log | 4.0 | 76 | 0.6076 | 64.7797 | 49.8902 | 62.433 | 61.8109 | 12.9286 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-1024-0.05-lora-epoch_4
|
simonycl
| 2024-01-25T14:07:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-25T14:06:43Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
MoulikBansal/test_test_chal_ja_bhai-merged-model
|
MoulikBansal
| 2024-01-25T14:02:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T14:02:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SoreHabte/distilbert-base-uncased-lora-text-classification
|
SoreHabte
| 2024-01-25T14:01:42Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T14:01:35Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-lora-text-classification
results: []
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-lora-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9528
- Accuracy: 0.887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|
| No log | 1.0 | 250 | 0.3796 | {'accuracy': 0.88} |
| 0.4157 | 2.0 | 500 | 0.4191 | {'accuracy': 0.879} |
| 0.4157 | 3.0 | 750 | 0.6114 | {'accuracy': 0.867} |
| 0.1906 | 4.0 | 1000 | 0.5635 | {'accuracy': 0.882} |
| 0.1906 | 5.0 | 1250 | 0.7240 | {'accuracy': 0.879} |
| 0.0727 | 6.0 | 1500 | 0.8097 | {'accuracy': 0.883} |
| 0.0727 | 7.0 | 1750 | 0.9097 | {'accuracy': 0.888} |
| 0.0275 | 8.0 | 2000 | 0.9154 | {'accuracy': 0.887} |
| 0.0275 | 9.0 | 2250 | 0.9432 | {'accuracy': 0.886} |
| 0.0133 | 10.0 | 2500 | 0.9528 | {'accuracy': 0.887} |
### Framework versions
- PEFT 0.4.0
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
rcade/testing_model
|
rcade
| 2024-01-25T13:59:51Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T13:57:29Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: testing_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testing_model
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4629
- Accuracy: 0.8358
- F1: 0.8874
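A minimal inference sketch with the text-classification pipeline (label names depend on the undocumented training data):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="rcade/testing_model")
print(classifier("This is exactly what I was looking for."))
```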
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5682 | 1.0 | 230 | 0.4783 | 0.7990 | 0.8669 |
| 0.3425 | 2.0 | 460 | 0.4264 | 0.8333 | 0.8863 |
| 0.1752 | 3.0 | 690 | 0.4629 | 0.8358 | 0.8874 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
GaganpreetSingh/Mistral-7b-constitutional_India
|
GaganpreetSingh
| 2024-01-25T13:50:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T13:50:43Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jimmyhd/oneTextCol
|
Jimmyhd
| 2024-01-25T13:48:35Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:13:04Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
EzraWilliam/wav2vec2-base-fleurs-CommonVoice-demo-google-colab-Ezra_William_Prod1
|
EzraWilliam
| 2024-01-25T13:40:51Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:xtreme_s",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-22T09:13:10Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- wer
model-index:
- name: wav2vec2-base-fleurs-CommonVoice-demo-google-colab-Ezra_William_Prod1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: xtreme_s
type: xtreme_s
config: fleurs.id_id
split: test
args: fleurs.id_id
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-fleurs-CommonVoice-demo-google-colab-Ezra_William_Prod1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1819
- Wer: 1.0
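A minimal inference sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder; note the WER of 1.0 above before relying on the transcriptions):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="EzraWilliam/wav2vec2-base-fleurs-CommonVoice-demo-google-colab-Ezra_William_Prod1",
)
print(asr("path/to/audio.wav"))  # placeholder audio path
```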
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.236 | 6.15 | 100 | 2.9095 | 1.0 |
| 2.8926 | 12.31 | 200 | 2.8601 | 1.0 |
| 2.871 | 18.46 | 300 | 2.8546 | 1.0 |
| 2.763 | 24.62 | 400 | 2.1819 | 1.0 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Skier8402/XLNetTokenizerFast_unigram_like
|
Skier8402
| 2024-01-25T13:37:11Z | 0 | 0 | null |
[
"NLP",
"Unigram",
"en",
"dataset:wikitext",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T12:50:41Z |
---
license: apache-2.0
datasets:
- wikitext
language:
- en
tags:
- NLP
- Unigram
---
|
logame07/corgy_dog_LoRA
|
logame07
| 2024-01-25T13:35:29Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-07T19:23:14Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of me
license: openrail++
---
# SDXL LoRA DreamBooth - logame07/corgy_dog_LoRA
<Gallery />
## Model description
These are logame07/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of me` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](logame07/corgy_dog_LoRA/tree/main) them in the Files & versions tab.
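A rough usage sketch with 🧨 diffusers, mirroring the usual SDXL LoRA loading pattern; the `weight_name` below is an assumption, so check the Files & versions tab for the actual filename:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# weight_name is an assumption (default output name of the diffusers DreamBooth LoRA script)
pipeline.load_lora_weights("logame07/corgy_dog_LoRA", weight_name="pytorch_lora_weights.safetensors")
image = pipeline("a photo of me playing in a park").images[0]
image.save("corgy_dog_lora_sample.png")
```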
|
aymurai/flair-ner-spanish-judicial
|
aymurai
| 2024-01-25T13:26:28Z | 442 | 4 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"es",
"dataset:ArJuzPCyF10",
"license:mit",
"region:us"
] |
token-classification
| 2023-01-26T13:18:50Z |
---
license: mit
language:
- es
tags:
- flair
- token-classification
- sequence-tagger-model
datasets:
- ArJuzPCyF10
metrics:
- precision
- recall
- f1-score
widget:
- text: 1. DECLARAR EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento de la suspensión del proceso a prueba, y SOBRESEER a EZEQUIEL CAMILO MARCONNI, DNI 11.222.333, en orden a los delitos de lesiones leves agravadas, amenazas simples y agravadas por el uso de armas.
library_name: flair
pipeline_tag: token-classification
---
# Model Description
Following the FLAIR guidelines for training an NER model, we trained a model on top of [BETO embeddings](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) (a Spanish version of BERT trained on a Spanish corpus) with a BiLSTM-CRF architecture.
This model was developed by [{ collective.ai }](https://collectiveai.io) as part of the [AymurAI](https://www.aymurai.info) project by [DataGenero](https://www.datagenero.org).
# About AymurAI, its uses and limitations
AymurAI is intended to be used as a tool to address the lack of available data in the judicial system on gender-based violence (GBV) rulings in Latin America. The goal is to increase report levels, build trust in the justice system, and improve access to justice for women and LGBTIQ+ people. AymurAI will generate and maintain anonymized datasets from legal rulings to understand GBV and support policy making, and also contribute to feminist collectives' campaigns.
AymurAI is still a prototype and is only being implemented in Criminal Court N°10 in the City of Buenos Aires, Argentina. Its capabilities are limited to semi-automated data collection and analysis, and the results may be subject to limitations such as the quality and consistency of the data, and the availability of the data. Additionally, the effectiveness of AymurAI in addressing the lack of transparency in the judicial system and improving access to justice may also depend on other factors such as the level of cooperation from court officials and the broader cultural and political context.
This model was trained with a closed dataset from an Argentine criminal court. It is designed to identify and extract relevant information from court rulings related to GBV cases. The use of a domain-specific dataset from an Argentine criminal court ensures that the model is tailored to the specific legal and cultural context, allowing for more accurate results. However, it also means that the model may not be applicable or effective in other countries or regions with different legal systems or cultural norms.
# Usage
## How to use the model in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)**.
Install it with `pip install flair`
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("aymurai/flair-ner-spanish-judicial")
# make example sentence
sentence = Sentence("1. DECLARAR EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento de la suspensión del proceso a prueba, y SOBRESEER a EZEQUIEL CAMILO MARCONNI, DNI 11.222.333, en orden a los delitos de lesiones leves agravadas, amenazas simples y agravadas por el uso de armas.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span[2:11]: "EXTINGUIDA LA ACCIÓN PENAL en este caso por cumplimiento" → DETALLE (0.5498)
Span[13:18]: "suspensión del proceso a prueba" → OBJETO_DE_LA_RESOLUCION (0.5647)
Span[20:21]: "SOBRESEER" → DETALLE (0.7766)
Span[22:25]: "EZEQUIEL CAMILO MARCONNI" → NOMBRE (0.6454)
Span[35:36]: "lesiones" → CONDUCTA (0.9457)
Span[36:38]: "leves agravadas" → CONDUCTA_DESCRIPCION (0.8818)
Span[39:40]: "amenazas" → CONDUCTA (0.956)
Span[40:48]: "simples y agravadas por el uso de armas" → CONDUCTA_DESCRIPCION (0.6866)
```
## Using the model in AymurAI platform
Please refer to [aymurai.info](https://www.aymurai.info) for more information about the full platform.
You can also check the development repository [here](https://github.com/aymurai/dev).
# Entities and metrics
## Description
Please refer to the entities' description table ([en](docs/en-entities-table.md)|[es](docs/es-entities-table.md)).
For a complete description of the entities considered by AymurAI, refer to the [Glossary for the Dataset with gender perspective](https://docs.google.com/document/d/123B9T2abCEqBaxxOl5c7HBJZRdIMtKDWo6IKHIVil04/edit) written by [Criminal Court N°10](https://www.twitter.com/jpcyf10) (Spanish only).
## Data
The model was trained with a dataset of 1200 legal rulings from an Argentine criminal court.
Due to the nature of the data, the documents are kept private.
### List of annotation contributors
The dataset was manually annotated by:
* Diego Scopetta
* Franny Rodriguez Gerzovich ([email]([email protected])|[linkedin](https://www.linkedin.com/in/francescarg))
* Laura Barreiro
* Matías Sosa
* Maximiliano Sosa
* Patricia Sandoval
* Santiago Bezchinsky ([email]([email protected])|[linkedin](https://www.linkedin.com/in/santiago-bezchinsky))
* Zoe Rodriguez Gerzovich
## Metrics
| label | precision | recall | f1-score |
|-----------------------------------------------------|-----------|--------|----------|
| FECHA_DE_NACIMIENTO | 0.98 | 0.99 | 0.99 |
| FECHA_RESOLUCION | 0.95 | 0.98 | 0.96 |
| NACIONALIDAD | 0.94 | 0.98 | 0.96 |
| GENERO | 1.00 | 0.50 | 0.67 |
| HORA_DE_INICIO | 0.98 | 0.92 | 0.95 |
| NOMBRE | 0.94 | 0.95 | 0.95 |
| FRASES_AGRESION | 0.90 | 0.98 | 0.94 |
| HORA_DE_CIERRE | 0.90 | 0.92 | 0.91 |
| NIVEL_INSTRUCCION | 0.85 | 0.94 | 0.90 |
| N_EXPTE_EJE | 0.85 | 0.93 | 0.89 |
| TIPO_DE_RESOLUCION | 0.63 | 0.93 | 0.75 |
| VIOLENCIA_DE_GENERO | 0.49 | 0.59 | 0.54 |
| RELACION_Y_TIPO_ENTRE_ACUSADO/A_Y_DENUNCIANTE | 0.93 | 0.76 | 0.84 |
| HIJOS_HIJAS_EN_COMUN | 0.47 | 0.57 | 0.52 |
| MODALIDAD_DE_LA_VIOLENCIA | 0.57 | 0.56 | 0.57 |
| FECHA_DEL_HECHO | 0.83 | 0.83 | 0.83 |
| CONDUCTA | 0.79 | 0.67 | 0.73 |
| ART_INFRINGIDO | 0.76 | 0.74 | 0.75 |
| DETALLE | 0.53 | 0.37 | 0.43 |
| OBJETO_DE_LA_RESOLUCION | 0.60 | 0.78 | 0.68 |
| CONDUCTA_DESCRIPCION | 0.54 | 0.43 | 0.48 |
| LUGAR_DEL_HECHO | 0.75 | 0.47 | 0.58 |
| EDAD_AL_MOMENTO_DEL_HECHO | 0.50 | 0.20 | 0.29 |
| PERSONA_ACUSADA_NO_DETERMINADA | 0.71 | 0.19 | 0.30 |
| | | | |
| macro avg | 0.77 | 0.72 | 0.73 |
# GitHub
You can see our open-source development [here](https://github.com/AymurAI/).
# Citation
Please cite [the following paper](https://drive.google.com/file/d/1P-hW0JKXWZ44Fn94fDVIxQRTExkK6m4Y/view) when using AymurAI:
```bibtex
@techreport{feldfeber2022,
author = "Feldfeber, Ivana and Quiroga, Yasmín Belén and Guevara, Clarissa and Ciolfi Felice, Marianela",
title = "Feminisms in Artificial Intelligence: Automation Tools towards a Feminist Judiciary Reform in Argentina and Mexico",
institution = "DataGenero",
year = "2022",
url = "https://drive.google.com/file/d/1P-hW0JKXWZ44Fn94fDVIxQRTExkK6m4Y/view"
}
```
|
CLMBR/npi-sim-ques-lstm-3
|
CLMBR
| 2024-01-25T13:23:33Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-01-19T12:23:38Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-sim-ques-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-sim-ques-lstm-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9715
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8041 | 0.03 | 76320 | 4.7586 |
| 4.5148 | 1.03 | 152640 | 4.4771 |
| 4.3758 | 0.03 | 228960 | 4.3429 |
| 4.286 | 1.03 | 305280 | 4.2606 |
| 4.2235 | 0.03 | 381600 | 4.2032 |
| 4.1769 | 1.03 | 457920 | 4.1622 |
| 4.1388 | 0.03 | 534240 | 4.1311 |
| 4.1054 | 1.03 | 610560 | 4.1065 |
| 4.0766 | 0.03 | 686880 | 4.0882 |
| 4.0509 | 1.03 | 763200 | 4.0714 |
| 4.0307 | 0.03 | 839520 | 4.0580 |
| 4.0163 | 1.03 | 915840 | 4.0468 |
| 3.9939 | 0.03 | 992160 | 4.0374 |
| 3.9762 | 0.03 | 1068480 | 4.0297 |
| 3.9649 | 1.03 | 1144800 | 4.0224 |
| 3.9597 | 0.03 | 1221120 | 4.0163 |
| 3.9488 | 1.03 | 1297440 | 4.0111 |
| 3.9365 | 0.03 | 1373760 | 4.0063 |
| 3.9318 | 1.03 | 1450080 | 4.0024 |
| 3.9244 | 0.03 | 1526400 | 3.9980 |
| 3.9184 | 1.03 | 1602720 | 3.9953 |
| 3.9142 | 0.03 | 1679040 | 3.9931 |
| 3.9091 | 1.03 | 1755360 | 3.9907 |
| 3.9024 | 0.03 | 1831680 | 3.9885 |
| 3.8957 | 1.03 | 1908000 | 3.9863 |
| 3.8879 | 0.03 | 1984320 | 3.9844 |
| 3.8829 | 1.03 | 2060640 | 3.9828 |
| 3.8797 | 0.03 | 2136960 | 3.9809 |
| 3.8701 | 1.03 | 2213280 | 3.9796 |
| 3.8611 | 0.03 | 2289600 | 3.9784 |
| 3.8603 | 0.03 | 2365920 | 3.9774 |
| 3.8637 | 1.03 | 2442240 | 3.9764 |
| 3.8601 | 0.03 | 2518560 | 3.9757 |
| 3.8531 | 0.03 | 2594880 | 3.9747 |
| 3.855 | 0.03 | 2671200 | 3.9740 |
| 3.8516 | 1.03 | 2747520 | 3.9734 |
| 3.8512 | 0.03 | 2823840 | 3.9728 |
| 3.852 | 0.03 | 2900160 | 3.9722 |
| 3.8529 | 1.03 | 2976480 | 3.9717 |
| 3.8489 | 0.02 | 3052726 | 3.9715 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mmnga/cyberagent-calm2-7b-chat-dpo-experimental-gguf
|
mmnga
| 2024-01-25T13:14:27Z | 213 | 4 | null |
[
"gguf",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T12:28:54Z |
---
license: cc-by-4.0
---
# cyberagent-calm2-7b-chat-dpo-experimental-gguf
This is a GGUF-format conversion of [calm2-7b-chat-dpo-experimental, published by CyberAgent](https://huggingface.co/cyberagent/calm2-7b-chat-dpo-experimental).
## Convert
The conversion was done with this [convert.py for calm2](https://gist.github.com/mmnga/6f3b0b9db773c7cc4d58421ea7ebf98e).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./main -m 'cyberagent-calm2-7b-chat-dpo-experimental-q4_0.gguf' -p "USER: AIによって私達の暮らしはどのように変わりますか? \nASSISTANT: " -n 256 --temp 0.8
```
---
|
tanatapanun/fine-tuned-BioBART-20-epochs-1024-input-128-output
|
tanatapanun
| 2024-01-25T13:05:30Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T12:38:46Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-20-epochs-1024-input-128-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-20-epochs-1024-input-128-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6050
- Rouge1: 0.1704
- Rouge2: 0.0496
- Rougel: 0.138
- Rougelsum: 0.1356
- Gen Len: 34.1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.5303 | 0.0 | 0.0 | 0.0 | 0.0 | 12.58 |
| No log | 2.0 | 302 | 1.9967 | 0.1046 | 0.0318 | 0.0908 | 0.0904 | 26.22 |
| No log | 3.0 | 453 | 1.6736 | 0.0447 | 0.0076 | 0.036 | 0.0353 | 15.66 |
| 4.5402 | 4.0 | 604 | 1.5728 | 0.1397 | 0.0344 | 0.1068 | 0.1079 | 34.51 |
| 4.5402 | 5.0 | 755 | 1.5231 | 0.1675 | 0.0345 | 0.1325 | 0.1328 | 34.4 |
| 4.5402 | 6.0 | 906 | 1.4986 | 0.1195 | 0.0287 | 0.0863 | 0.0873 | 38.66 |
| 1.1958 | 7.0 | 1057 | 1.4791 | 0.1478 | 0.0379 | 0.1172 | 0.1176 | 35.41 |
| 1.1958 | 8.0 | 1208 | 1.4802 | 0.1459 | 0.0368 | 0.1066 | 0.108 | 32.5 |
| 1.1958 | 9.0 | 1359 | 1.4841 | 0.1687 | 0.0289 | 0.1342 | 0.1345 | 30.89 |
| 0.7933 | 10.0 | 1510 | 1.5005 | 0.1457 | 0.035 | 0.1125 | 0.1103 | 34.3 |
| 0.7933 | 11.0 | 1661 | 1.5101 | 0.1808 | 0.0364 | 0.1498 | 0.1505 | 31.33 |
| 0.7933 | 12.0 | 1812 | 1.5262 | 0.1882 | 0.0419 | 0.1553 | 0.1549 | 31.65 |
| 0.7933 | 13.0 | 1963 | 1.5481 | 0.167 | 0.032 | 0.1381 | 0.139 | 31.04 |
| 0.5232 | 14.0 | 2114 | 1.5494 | 0.1723 | 0.0442 | 0.1407 | 0.138 | 34.88 |
| 0.5232 | 15.0 | 2265 | 1.5590 | 0.1801 | 0.0318 | 0.142 | 0.1413 | 37.99 |
| 0.5232 | 16.0 | 2416 | 1.5829 | 0.1608 | 0.0353 | 0.1249 | 0.1249 | 33.97 |
| 0.3565 | 17.0 | 2567 | 1.5837 | 0.1535 | 0.0354 | 0.1159 | 0.115 | 35.96 |
| 0.3565 | 18.0 | 2718 | 1.5977 | 0.1565 | 0.0349 | 0.1244 | 0.1227 | 34.29 |
| 0.3565 | 19.0 | 2869 | 1.6002 | 0.169 | 0.0428 | 0.1358 | 0.1331 | 34.84 |
| 0.2734 | 20.0 | 3020 | 1.6050 | 0.1704 | 0.0496 | 0.138 | 0.1356 | 34.1 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
fashxp/zero-shot-image-classification
|
fashxp
| 2024-01-25T12:50:12Z | 0 | 0 |
generic
|
[
"generic",
"vision",
"zero-shot-image-classification",
"endpoints-template",
"base_model:openai/clip-vit-large-patch14",
"base_model:finetune:openai/clip-vit-large-patch14",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2024-01-25T10:25:18Z |
---
tags:
- vision
- zero-shot-image-classification
- endpoints-template
inference: true
pipeline_tag: zero-shot-image-classification
base_model: openai/clip-vit-large-patch14
library_name: generic
---
# Fork of [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32) for a `zero-shot-image-classification` Inference endpoint.
This repository implements a `custom` task for `zero-shot-image-classification` for 🤗 Inference Endpoints. The code for the customized
pipeline is in `handler.py`.
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `handler.py` file is used.
### Expected request payload
```json
{
"image": encoded_image,
"parameters": {
"candidate_labels": "green, yellow, blue, white, silver"
}
}
```
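As a minimal client-side sketch (not part of the original card): it assumes the handler expects a base64-encoded image string, and uses placeholder values for the endpoint URL, token, and image file.
```python
import base64
import requests
# Hypothetical values: replace with your own deployment URL, token, and image file.
ENDPOINT_URL = "https://your-endpoint.endpoints.huggingface.cloud"
HF_TOKEN = "hf_your_token_here"
with open("car.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")
payload = {
    "image": encoded_image,
    "parameters": {"candidate_labels": "green, yellow, blue, white, silver"},
}
response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json())
```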
|
sosancn/clasificador-muchocine
|
sosancn
| 2024-01-25T12:49:57Z | 90 | 0 |
transformers
|
[
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T12:49:35Z |
---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4858
- Accuracy: 0.4297
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3230 | 0.3806 |
| 1.3897 | 2.0 | 776 | 1.3217 | 0.4258 |
| 0.9605 | 3.0 | 1164 | 1.4858 | 0.4297 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tavtav/mistral-test-model3
|
tavtav
| 2024-01-25T12:46:51Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T10:52:19Z |
Trained on fanfiction data and roleplay data.
No specific format was used for the dataset, i.e., it is not formatted for Alpaca or Pygmalion prompts. The quality is questionable.
|
CLMBR/npi-only-transformer-2
|
CLMBR
| 2024-01-25T12:28:06Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T14:30:13Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-only-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-only-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8616
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2265 | 0.03 | 76320 | 4.1999 |
| 4.0215 | 1.03 | 152640 | 4.0292 |
| 3.9133 | 0.03 | 228960 | 3.9544 |
| 3.8411 | 1.03 | 305280 | 3.9125 |
| 3.7911 | 0.03 | 381600 | 3.8867 |
| 3.7507 | 1.03 | 457920 | 3.8704 |
| 3.7187 | 0.03 | 534240 | 3.8599 |
| 3.6904 | 1.03 | 610560 | 3.8540 |
| 3.6628 | 0.03 | 686880 | 3.8482 |
| 3.6345 | 1.03 | 763200 | 3.8461 |
| 3.6098 | 0.03 | 839520 | 3.8440 |
| 3.5897 | 1.03 | 915840 | 3.8438 |
| 3.572 | 0.03 | 992160 | 3.8438 |
| 3.5528 | 1.03 | 1068480 | 3.8441 |
| 3.5343 | 0.03 | 1144800 | 3.8445 |
| 3.5293 | 1.03 | 1221120 | 3.8463 |
| 3.5111 | 0.03 | 1297440 | 3.8466 |
| 3.499 | 1.03 | 1373760 | 3.8482 |
| 3.4854 | 0.03 | 1450080 | 3.8505 |
| 3.4724 | 0.03 | 1526400 | 3.8520 |
| 3.4654 | 1.03 | 1602720 | 3.8540 |
| 3.455 | 0.03 | 1679040 | 3.8551 |
| 3.4482 | 1.03 | 1755360 | 3.8560 |
| 3.4398 | 0.03 | 1831680 | 3.8573 |
| 3.4268 | 1.03 | 1908000 | 3.8586 |
| 3.4123 | 0.03 | 1984320 | 3.8589 |
| 3.3989 | 0.03 | 2060640 | 3.8610 |
| 3.3867 | 1.03 | 2136960 | 3.8618 |
| 3.3755 | 0.03 | 2213280 | 3.8635 |
| 3.364 | 1.03 | 2289600 | 3.8636 |
| 3.3533 | 0.03 | 2365920 | 3.8637 |
| 3.3544 | 0.03 | 2442240 | 3.8637 |
| 3.3405 | 1.03 | 2518560 | 3.8649 |
| 3.3325 | 0.03 | 2594880 | 3.8654 |
| 3.3219 | 1.03 | 2671200 | 3.8645 |
| 3.3145 | 0.03 | 2747520 | 3.8646 |
| 3.3084 | 1.03 | 2823840 | 3.8649 |
| 3.3026 | 0.03 | 2900160 | 3.8641 |
| 3.299 | 1.03 | 2976480 | 3.8632 |
| 3.2939 | 0.02 | 3052726 | 3.8616 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Athuin/tinyLama-german
|
Athuin
| 2024-01-25T12:14:04Z | 115 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/tinyllama",
"base_model:finetune:unsloth/tinyllama",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T12:10:15Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/tinyllama
---
# Uploaded model
- **Developed by:** Athuin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/tinyllama
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rcade/test_falcon_model_learning
|
rcade
| 2024-01-25T12:03:14Z | 92 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-24T14:11:14Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test_falcon_model_learning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_falcon_model_learning
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4713
- Accuracy: 0.8358
- F1: 0.8870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5838 | 1.0 | 230 | 0.5101 | 0.7672 | 0.8455 |
| 0.3994 | 2.0 | 460 | 0.5092 | 0.8088 | 0.875 |
| 0.2417 | 3.0 | 690 | 0.4713 | 0.8358 | 0.8870 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
LoneStriker/Everyone-Coder-33b-Base-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-25T11:52:57Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:38:28Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I suck at running LLM benchmarks, but from hand testing the model on https://edabit.com/challenge coding challenges against up-to-date GPT-4, my model is hands down beating it in coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add (Always end with "<|EOT|>") in addition to your system prompt and (Always end your response with "<|EOT|>") at the end of the user message; the preset is shown below. Then add <|EOT|> as a custom stop string in your LM text-generation interface.
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
The models that were used in this merger were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merger can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
microsoft/beit-large-finetuned-ade-640-640
|
microsoft
| 2024-01-25T11:51:04Z | 486 | 14 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"beit",
"vision",
"image-segmentation",
"dataset:scene_parse_150",
"arxiv:2106.08254",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-segmentation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- vision
- image-segmentation
datasets:
- scene_parse_150
widget:
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg
example_title: House
- src: https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000002.jpg
example_title: Castle
---
# BEiT (large-sized model, fine-tuned on ADE20k)
BEiT model pre-trained in a self-supervised fashion on ImageNet-21k (14 million images, 21,841 classes) at resolution 224x224, and fine-tuned on [ADE20k](https://huggingface.co/datasets/scene_parse_150) (an important benchmark for semantic segmentation of images) at resolution 640x640. It was introduced in the paper [BEIT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) by Hangbo Bao, Li Dong and Furu Wei and first released in [this repository](https://github.com/microsoft/unilm/tree/master/beit).
Disclaimer: The team releasing BEiT did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The BEiT model is a Vision Transformer (ViT), which is a transformer encoder model (BERT-like). In contrast to the original ViT model, BEiT is pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. The pre-training objective for the model is to predict visual tokens from the encoder of OpenAI's DALL-E's VQ-VAE, based on masked patches.
Next, the model was fine-tuned in a supervised fashion on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. Contrary to the original ViT models, BEiT models do use relative position embeddings (similar to T5) instead of absolute position embeddings, and perform classification of images by mean-pooling the final hidden states of the patches, instead of placing a linear layer on top of the final hidden state of the [CLS] token.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: for semantic segmentation, one can just add one of the decode heads available in the [mmseg library](https://github.com/open-mmlab/mmsegmentation) for example, and fine-tune the model in a supervised fashion on annotated images. This is what the authors did: they fine-tuned BEiT with an UperHead segmentation decode head, allowing it to obtain SOTA results on important benchmarks such as ADE20k and CityScapes.
## Intended uses & limitations
You can use the raw model for semantic segmentation of images. See the [model hub](https://huggingface.co/models?search=microsoft/beit) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model for semantic segmentation:
```python
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation
from datasets import load_dataset
from PIL import Image
# load ADE20k image
ds = load_dataset("hf-internal-testing/fixtures_ade20k", split="test")
image = Image.open(ds[0]["file"])  # take the first example; assumes the usual "file" column of this fixtures dataset
feature_extractor = BeitFeatureExtractor.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
model = BeitForSemanticSegmentation.from_pretrained('microsoft/beit-large-finetuned-ade-640-640')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# logits are of shape (batch_size, num_labels, height/4, width/4)
logits = outputs.logits
```
Currently, both the feature extractor and model support PyTorch.
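To turn the logits into a segmentation map, one common post-processing step (a minimal sketch, not part of the original card; it reuses the `logits` and `image` variables from the snippet above) is to upsample the logits back to the input resolution and take the per-pixel argmax:
```python
import torch
# upsample logits to the original (height, width) of the PIL image, then argmax over the class dimension
upsampled_logits = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
predicted_map = upsampled_logits.argmax(dim=1)[0]  # (height, width) tensor of ADE20k class indices
```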
## Training data
This BEiT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes, and fine-tuned on [ADE20k](http://sceneparsing.csail.mit.edu/), a dataset consisting of thousands of annotated images and 150 classes.
## Training procedure
### Preprocessing
The exact details of preprocessing of images during training/validation can be found [here](https://github.com/microsoft/unilm/blob/master/beit/datasets.py).
Images are cropped and padded to the same resolution (640x640) and normalized across the RGB channels with the ImageNet mean and standard deviation.
### Pretraining
For all pre-training related hyperparameters, we refer to page 15 of the [original paper](https://arxiv.org/abs/2106.08254).
## Evaluation results
For evaluation results on several image classification benchmarks, we refer to tables 1 and 2 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-08254,
author = {Hangbo Bao and
Li Dong and
Furu Wei},
title = {BEiT: {BERT} Pre-Training of Image Transformers},
journal = {CoRR},
volume = {abs/2106.08254},
year = {2021},
url = {https://arxiv.org/abs/2106.08254},
archivePrefix = {arXiv},
eprint = {2106.08254},
timestamp = {Tue, 29 Jun 2021 16:55:04 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-08254.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
haturusinghe/1st_f1_0.5936_acc_0.83_xlm-roberta-large_mrp_2e-05_8_RAdam
|
haturusinghe
| 2024-01-25T11:50:00Z | 19 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T11:46:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Q-bert/MetaMath-Cybertron
|
Q-bert
| 2024-01-25T11:44:11Z | 59 | 6 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Math",
"en",
"dataset:meta-math/MetaMathQA",
"base_model:fblgit/una-cybertron-7b-v2-bf16",
"base_model:finetune:fblgit/una-cybertron-7b-v2-bf16",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-05T19:18:19Z |
---
license: apache-2.0
datasets:
- meta-math/MetaMathQA
language:
- en
pipeline_tag: text-generation
tags:
- Math
base_model:
- fblgit/una-cybertron-7b-v2-bf16
- meta-math/MetaMath-Mistral-7B
---
## MetaMath-Cybertron
Merge [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) and [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) using slerp merge.
You can use ChatML format.
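For reference, a ChatML-formatted prompt looks like the following (illustrative only; the system message and user turn below are assumptions, not a template shipped with the model):
```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is 12 * 7?<|im_end|>
<|im_start|>assistant
```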
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [Coming soon]()
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | Coming soon |
| ARC (25-shot) | Coming soon |
| HellaSwag (10-shot) | Coming soon |
| MMLU (5-shot) | Coming soon |
| TruthfulQA (0-shot) | Coming soon |
| Winogrande (5-shot) | Coming soon |
| GSM8K (5-shot) | Coming soon |
|
lokaspire/mistral-latest
|
lokaspire
| 2024-01-25T11:38:10Z | 0 | 0 | null |
[
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:finetune:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T09:48:31Z |
---
license: apache-2.0
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
tags:
- generated_from_trainer
model-index:
- name: mistral-latest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-latest
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
|
simonycl/data-selection-Llama-2-7b-sharegpt-KMenasRandomDeita-1024-0.05-lora
|
simonycl
| 2024-01-25T11:36:31Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-01-25T11:36:21Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
unikei/bert-base-proteins
|
unikei
| 2024-01-25T11:22:21Z | 94 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"biology",
"medical",
"dataset:Ensembl",
"license:bigscience-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-10-13T12:12:30Z |
---
license: bigscience-openrail-m
widget:
- text: M[MASK]LWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN
datasets:
- Ensembl
pipeline_tag: fill-mask
tags:
- biology
- medical
---
# BERT base for proteins
This is a bidirectional transformer pretrained on amino-acid sequences of human proteins.
Example: Insulin (P01308)
```
MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN
```
The model was trained using the masked-language-modeling objective.
## Intended uses
This model is primarily aimed at being fine-tuned on the following tasks:
- protein function
- molecule-to-gene-expression mapping
- cell targeting
## How to use in your code
```python
from transformers import BertTokenizerFast, BertModel
checkpoint = 'unikei/bert-base-proteins'
tokenizer = BertTokenizerFast.from_pretrained(checkpoint)
model = BertModel.from_pretrained(checkpoint)
example = 'MALWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN'
tokens = tokenizer(example, return_tensors='pt')
predictions = model(**tokens)
```
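Since the model was trained with the masked-language-modeling objective, you can also query it through the `fill-mask` pipeline. A minimal sketch (assuming the masked-LM head in this repository loads through the standard pipeline), using the insulin example from the widget:
```python
from transformers import pipeline
fill = pipeline('fill-mask', model='unikei/bert-base-proteins')
# insulin (P01308) with the second residue masked out
masked = 'M[MASK]LWMRLLPLLALLALWGPDPAAAFVNQHLCGSHLVEALYLVCGERGFFYTPKTRREAEDLQVGQVELGGGPGAGSLQPLALEGSLQKRGIVEQCCTSICSLYQLENYCN'
for prediction in fill(masked, top_k=3):
    print(prediction['token_str'], round(prediction['score'], 4))
```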
|
LoneStriker/Everyone-Coder-33b-Base-4.65bpw-h6-exl2
|
LoneStriker
| 2024-01-25T11:18:21Z | 7 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T11:09:50Z |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL
tags:
- merge
---
Everyone-Coder-33b-Base

EveryoneLLM is a series of models made by the community, for the community. This is a coding-specific model made using fine-tunes of deepseek-coder-33b-base.
______________________________________________________________________________________________________________
I'm having trouble benchmarking this model because I suck at running LLM benchmarks, but from hand testing the model on https://edabit.com/challenge coding challenges against up-to-date GPT-4, my model is hands down beating it in coding.
______________________________________________________________________________________________________________
I've recently noticed this model has trouble with end tokens, so I made a custom prompt template for it. Make sure to add (Always end with "<|EOT|>") in addition to your system prompt and (Always end your response with "<|EOT|>") at the end of the user message; the preset is shown below. Then add <|EOT|> as a custom stop string in your LM text-generation interface.
```
Always end with "<|EOT|>"
{System}
<|User|>
{User}. Always end your response with "<|EOT|>"
<|Assistant|>
{Assistant}
```
The models that were used in this merger were as follows:
- https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
- https://huggingface.co/codefuse-ai/CodeFuse-DeepSeek-33B
- https://huggingface.co/WizardLM/WizardCoder-33B-V1.1
Thank you to the creators of the above AI models; they have full credit for the EveryoneLLM series of models. Without their hard work we wouldn't be able to achieve the great success we have in the open-source community. 💗
You can find the write up for merging models here:
https://docs.google.com/document/d/1_vOftBnrk9NRk5h10UqrfJ5CDih9KBKL61yvrZtVWPE/edit?usp=sharing
Config for the merger can be found below:
```yaml
models:
- model: WizardLM_WizardCoder-33B-V1.1
parameters:
density: 1
weight: .5
- model: codefuse-ai_CodeFuse-DeepSeek-33B
parameters:
density: 1
weight: .5
merge_method: ties
base_model: deepseek-ai_deepseek-coder-33b-instruct
parameters:
normalize: true
int8_mask: true
dtype: float16
```
|
Artefact2/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-GGUF
|
Artefact2
| 2024-01-25T11:07:08Z | 51 | 3 | null |
[
"gguf",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-20T15:01:17Z |
---
license: apache-2.0
language:
- en
---
These are GGUF quantized versions of [notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES](https://huggingface.co/notstoic/Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES).
The importance matrix was trained for 1M tokens (2,000 batches of 512 tokens) using `wiki.train.raw`.
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later.
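A minimal way to try one of the quantized files from Python is through the `llama-cpp-python` bindings. This is only a sketch: the filename below is a placeholder for whichever quantization you download, and the IQ2 variants additionally require a sufficiently recent llama.cpp build as noted above.
```python
from llama_cpp import Llama
# placeholder filename: point this at the GGUF file you actually downloaded from this repo
llm = Llama(model_path="Nous-Hermes-2-Mixtruct-v0.1-8x7B-DPO-DARE_TIES-Q4_K_M.gguf", n_ctx=4096)
output = llm("Explain what an importance matrix is in one sentence.", max_tokens=96)
print(output["choices"][0]["text"])
```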
|
Pavan-124/wines_multilingual
|
Pavan-124
| 2024-01-25T10:56:19Z | 44 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"token-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-25T10:56:01Z |
---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_keras_callback
model-index:
- name: wines_multilingual
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wines_multilingual
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1375
- Validation Loss: 0.0916
- Train Precision: 0.8566
- Train Recall: 0.8662
- Train F1: 0.8614
- Train Accuracy: 0.9618
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5724, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch |
|:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:|
| 0.1375 | 0.0916 | 0.8566 | 0.8662 | 0.8614 | 0.9618 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.14.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
tjkmitl/PromptEmotionNewsModel
|
tjkmitl
| 2024-01-25T10:50:50Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf",
"base_model:finetune:openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T10:50:43Z |
---
license: apache-2.0
base_model: openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf
tags:
- generated_from_trainer
model-index:
- name: PromptEmotionNewsModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# PromptEmotionNewsModel
This model is a fine-tuned version of [openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
rcade/finetuned-bert-mrpc
|
rcade
| 2024-01-25T10:47:31Z | 91 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T08:58:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
base_model: bert-base-cased
model-index:
- name: finetuned-bert-mrpc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4277
- Accuracy: 0.8676
- F1: 0.9075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5745 | 1.0 | 230 | 0.4498 | 0.8088 | 0.8577 |
| 0.3784 | 2.0 | 460 | 0.4163 | 0.8505 | 0.8995 |
| 0.2217 | 3.0 | 690 | 0.4277 | 0.8676 | 0.9075 |
### Framework versions
- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
NovusResearch/Thestral-7B-0.1-TR-EN
|
NovusResearch
| 2024-01-25T10:37:18Z | 0 | 1 |
transformers
|
[
"transformers",
"text-generation",
"tr",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:06:30Z |
---
license: mit
language:
- tr
- en
library_name: transformers
pipeline_tag: text-generation
---
# Model Card: Thestral-7B-0.1-TR-EN
## Version: Based on Mistral-7B-v0.1, further pretrained on a large corpus of English-to-Turkish data.
The training dataset consists of approximately 490,000 pairs of translated texts. These pairs are predominantly sourced from movie subtitles, offering a diverse range of linguistic structures, idiomatic expressions, and cultural references. This rich dataset ensures the model is well-equipped to handle a variety of translation tasks within its domain.
## Model Training
Details regarding the model's training procedure, architecture, and fine-tuning processes will be extensively covered in the upcoming paper.
## Example Outputs (Finetuned Model)
```
Question: Acıbadem Mahallesi’nin sessiz ve huzurlu bir sokağından
bugünlere uzanan Erman Yaşar - Emre Özcan ikilisi Atıf Bey Sokak'ın
yeni bölümünde; Galatasaray'ın Trabzonspor'u deplasmanda 5-1 mağlup
etmesini, Wilfried Zaha'nın 9 numara performansını, Mauro Icardi'nin
dönmesiyle birlikte oluşabilecek hücum rotasyonunu, Fenerbahçe'nin
Samsunspor ile berabere kalmasını, İsmail Kartal'ın yaptığı değişiklikleri,
Kenan Yıldız'ın Juventus'ta ilk 11'e yerleşmesini ve Euro 2024'teki
hücum gücümüzü konuştu.
Answer: From the quiet and peaceful street of Acibadem Mahallesi to today;
Erman Yaşar - Emre Özcan duo in the new section of Atif Bey Street;
Galatasaray beating Trabzonspor 5-1 in the matchday, Wilfried Zaha's
performance at number 9, Mauro Icardi's return, Fenerbahçe drawing with
Samsunspor, Ismail Kartal's changes, Kenan Yıldız's place in the starting
XI at Juventus and our power for Euro 2024 were discussed.
```
## Example Outputs (Base Model)
```
Question: Acıbadem Mahallesi’nin sessiz ve huzurlu bir sokağından
bugünlere uzanan Erman Yaşar - Emre Özcan ikilisi Atıf Bey Sokak'ın
yeni bölümünde; Galatasaray'ın Trabzonspor'u deplasmanda 5-1 mağlup
etmesini, Wilfried Zaha'nın 9 numara performansını, Mauro Icardi'nin
dönmesiyle birlikte oluşabilecek hücum rotasyonunu, Fenerbahçe'nin
Samsunspor ile berabere kalmasını, İsmail Kartal'ın yaptığı değişiklikleri,
Kenan Yıldız'ın Juventus'ta ilk 11'e yerleşmesini ve Euro 2024'teki
hücum gücümüzü konuştu.
Answer: The sentence is about a street in Acibadem neighborhood that
is quiet and peaceful. The sentence talks about the new section of Atif Bey
Street, which extends from today to the past. It also mentions the victory of
Galatasaray over Trabzonspor in the match, the performance of Wilfried Zaha with
number 9, the return of Mauro Icardi, the draw between Fenerbahce and Samsunspor,
the changes made by Ismail Kartal, and the first eleven of Kenan Yildiz at Juventus.</s>
```
```python
import torch
from transformers import AutoModelForCausalLM

# Load the model with Flash Attention 2 in bfloat16, sharded across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "NovusResearch/Thestral-7B-0.1-TR-EN",
    use_flash_attention_2=True,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
```
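A minimal generation sketch (not part of the original card) that continues from the loading snippet above; the `Question:`/`Answer:` prompt format is assumed from the example outputs shown earlier and is not an official template.
```python
from transformers import AutoTokenizer

# Assumed prompt format, inferred from the example outputs in this card.
tokenizer = AutoTokenizer.from_pretrained("NovusResearch/Thestral-7B-0.1-TR-EN")
prompt = "Question: Bugün hava çok güzel, parka gidelim mi?\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens (the English answer).
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```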
|
HilariusJeremy/disaster_tweet_distilbert
|
HilariusJeremy
| 2024-01-25T10:30:57Z | 94 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T10:29:31Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: disaster_tweet_distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# disaster_tweet_distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4386
- Accuracy: 0.8404
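A minimal inference sketch (not part of the original card) using the 🤗 `pipeline` API; since the card does not document the label mapping, outputs may appear under generic `LABEL_0`/`LABEL_1` names.
```python
from transformers import pipeline

# Hypothetical usage sketch; label names depend on the saved id2label mapping.
classifier = pipeline(
    "text-classification",
    model="HilariusJeremy/disaster_tweet_distilbert",
)
print(classifier("Forest fire near La Ronge Sask. Canada"))
```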
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 191 | 0.4099 | 0.8503 |
| No log | 2.0 | 382 | 0.4386 | 0.8404 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
CLMBR/npi-sent-neg-transformer-2
|
CLMBR
| 2024-01-25T10:23:14Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-18T14:51:36Z |
---
tags:
- generated_from_trainer
model-index:
- name: npi-sent-neg-transformer-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# npi-sent-neg-transformer-2
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8611
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2269 | 0.03 | 76320 | 4.1959 |
| 4.0235 | 1.03 | 152640 | 4.0265 |
| 3.9131 | 0.03 | 228960 | 3.9513 |
| 3.85 | 1.03 | 305280 | 3.9115 |
| 3.7981 | 0.03 | 381600 | 3.8858 |
| 3.7602 | 0.03 | 457920 | 3.8691 |
| 3.7266 | 1.03 | 534240 | 3.8585 |
| 3.6964 | 0.03 | 610560 | 3.8516 |
| 3.6658 | 1.03 | 686880 | 3.8458 |
| 3.6398 | 0.03 | 763200 | 3.8438 |
| 3.6126 | 1.03 | 839520 | 3.8424 |
| 3.5966 | 0.03 | 915840 | 3.8418 |
| 3.5748 | 1.03 | 992160 | 3.8420 |
| 3.5537 | 0.03 | 1068480 | 3.8422 |
| 3.5371 | 1.03 | 1144800 | 3.8425 |
| 3.5303 | 0.03 | 1221120 | 3.8431 |
| 3.5156 | 1.03 | 1297440 | 3.8453 |
| 3.504 | 0.03 | 1373760 | 3.8461 |
| 3.4866 | 1.03 | 1450080 | 3.8476 |
| 3.4807 | 0.03 | 1526400 | 3.8488 |
| 3.4697 | 1.03 | 1602720 | 3.8503 |
| 3.4651 | 0.03 | 1679040 | 3.8521 |
| 3.4555 | 1.03 | 1755360 | 3.8523 |
| 3.446 | 0.03 | 1831680 | 3.8536 |
| 3.4332 | 1.03 | 1908000 | 3.8547 |
| 3.4185 | 0.03 | 1984320 | 3.8568 |
| 3.4035 | 1.03 | 2060640 | 3.8581 |
| 3.3945 | 0.03 | 2136960 | 3.8584 |
| 3.3798 | 1.03 | 2213280 | 3.8600 |
| 3.3664 | 0.03 | 2289600 | 3.8602 |
| 3.3548 | 1.03 | 2365920 | 3.8620 |
| 3.354 | 0.03 | 2442240 | 3.8621 |
| 3.3438 | 1.03 | 2518560 | 3.8629 |
| 3.3356 | 0.03 | 2594880 | 3.8631 |
| 3.3239 | 1.03 | 2671200 | 3.8638 |
| 3.3196 | 0.03 | 2747520 | 3.8636 |
| 3.3136 | 1.03 | 2823840 | 3.8638 |
| 3.3111 | 0.03 | 2900160 | 3.8632 |
| 3.3051 | 0.03 | 2976480 | 3.8617 |
| 3.2997 | 0.02 | 3052726 | 3.8611 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Kralley/mistral-7b-da-instr
|
Kralley
| 2024-01-25T10:20:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-23T14:00:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arun100/whisper-base-hi-4
|
arun100
| 2024-01-25T10:20:39Z | 61 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hi",
"dataset:mozilla-foundation/common_voice_16_0",
"base_model:arun100/whisper-base-hi-3",
"base_model:finetune:arun100/whisper-base-hi-3",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-25T06:09:30Z |
---
language:
- hi
license: apache-2.0
base_model: arun100/whisper-base-hi-3
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_0
metrics:
- wer
model-index:
- name: Whisper Base Hindi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_16_0 hi
type: mozilla-foundation/common_voice_16_0
config: hi
split: test
args: hi
metrics:
- name: Wer
type: wer
value: 27.6637932833796
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Hindi
This model is a fine-tuned version of [arun100/whisper-base-hi-3](https://huggingface.co/arun100/whisper-base-hi-3) on the mozilla-foundation/common_voice_16_0 hi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4681
- Wer: 27.6638
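A minimal transcription sketch (not part of the original card), assuming a local Hindi audio file and an environment with ffmpeg available for decoding.
```python
from transformers import pipeline

# Hypothetical usage sketch; "sample_hi.wav" is a placeholder path to a Hindi audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-base-hi-4",
)
print(asr("sample_hi.wav")["text"])
```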
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1251 | 13.16 | 1000 | 0.4681 | 27.6638 |
| 0.0812 | 26.32 | 2000 | 0.5046 | 28.2065 |
| 0.0584 | 39.47 | 3000 | 0.5393 | 28.3046 |
| 0.0441 | 52.63 | 4000 | 0.5639 | 28.4924 |
| 0.0392 | 65.79 | 5000 | 0.5734 | 28.5863 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0
|
Mihir1108/DeciLM
|
Mihir1108
| 2024-01-25T10:17:25Z | 5 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:Deci/DeciLM-7B-instruct",
"base_model:adapter:Deci/DeciLM-7B-instruct",
"license:apache-2.0",
"region:us"
] | null | 2024-01-25T10:09:22Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: Deci/DeciLM-7B-instruct
model-index:
- name: DeciLM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DeciLM
This model is a fine-tuned version of [Deci/DeciLM-7B-instruct](https://huggingface.co/Deci/DeciLM-7B-instruct) on the None dataset.
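Because this repository contains a PEFT adapter rather than full model weights, it is typically attached to the base model at load time; a hedged sketch, not taken from the original card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: load the Deci/DeciLM-7B-instruct base and attach this adapter.
# DeciLM ships custom modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    "Deci/DeciLM-7B-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "Mihir1108/DeciLM")
tokenizer = AutoTokenizer.from_pretrained("Deci/DeciLM-7B-instruct")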
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.1.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
shantanudhakad/phi2-webglm-qlora-v2
|
shantanudhakad
| 2024-01-25T10:16:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T10:16:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moreover18/vit-part1-friends
|
moreover18
| 2024-01-25T10:14:00Z | 178 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:moreover18/vit-base-patch16-224-in21k-YB",
"base_model:finetune:moreover18/vit-base-patch16-224-in21k-YB",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-25T09:58:37Z |
---
license: apache-2.0
base_model: moreover18/vit-base-patch16-224-in21k-YB
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-part1-friends
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9381107491856677
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-part1-friends
This model is a fine-tuned version of [moreover18/vit-base-patch16-224-in21k-YB](https://huggingface.co/moreover18/vit-base-patch16-224-in21k-YB) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2036
- Accuracy: 0.9381
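A minimal classification sketch (not part of the original card); the class labels come from the undocumented `imagefolder` dataset used for fine-tuning.
```python
from transformers import pipeline

# Hypothetical usage sketch; "frame.jpg" is a placeholder image path.
classifier = pipeline("image-classification", model="moreover18/vit-part1-friends")
print(classifier("frame.jpg"))
```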
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1627 | 1.3 | 50 | 0.2258 | 0.9202 |
| 0.1183 | 2.6 | 100 | 0.2148 | 0.9235 |
| 0.1223 | 3.9 | 150 | 0.2055 | 0.9267 |
| 0.0992 | 5.19 | 200 | 0.1976 | 0.9332 |
| 0.0824 | 6.49 | 250 | 0.2036 | 0.9381 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.1
|
HarshithNLP/bloom_7b_NER
|
HarshithNLP
| 2024-01-25T10:11:40Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloom-7b1",
"base_model:adapter:bigscience/bloom-7b1",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-25T10:11:04Z |
---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloom-7b1
model-index:
- name: bloom_7b_NER
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom_7b_NER
This model is a fine-tuned version of [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1) on an unknown dataset.
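Since this repository holds a PEFT adapter, it is loaded on top of the BLOOM base model; a hedged sketch, not taken from the original card:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch only: attach this adapter to the bigscience/bloom-7b1 base model.
# The prompt format expected for NER is not documented in this card.
base = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-7b1",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "HarshithNLP/bloom_7b_NER")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
```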
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
Viiiz/mistral-finetuned-alpaca
|
Viiiz
| 2024-01-25T10:06:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"base_model:adapter:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2024-01-24T04:57:12Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: TheBloke/Mistral-7B-Instruct-v0.1-GPTQ
model-index:
- name: mistral-finetuned-alpaca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-finetuned-alpaca
This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.1
|
teticio/latent-audio-diffusion-256
|
teticio
| 2024-01-25T10:05:37Z | 32 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-04T20:09:56Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
Latent De-noising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
sujayC66/spacy_trf_sector_e100
|
sujayC66
| 2024-01-25T10:05:11Z | 1 | 0 |
spacy
|
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] |
token-classification
| 2024-01-25T10:04:47Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.45
- name: NER Recall
type: recall
value: 0.28125
- name: NER F Score
type: f_score
value: 0.3461538462
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.7.2,<3.8.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SECTOR` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 34.62 |
| `ENTS_P` | 45.00 |
| `ENTS_R` | 28.12 |
| `TRANSFORMER_LOSS` | 9684.94 |
| `NER_LOSS` | 53278.63 |
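A minimal usage sketch (not part of the original card), assuming the packaged pipeline from this repository has been installed so that spaCy can load it by its package name:
```python
import spacy

# Sketch only: assumes the packaged `en_pipeline` wheel from this repo has
# been installed into the current environment.
nlp = spacy.load("en_pipeline")
doc = nlp("The company operates primarily in the renewable energy sector.")
print([(ent.text, ent.label_) for ent in doc.ents])  # entities labelled SECTOR
```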
|
teticio/audio-diffusion-breaks-256
|
teticio
| 2024-01-25T10:05:00Z | 15 | 4 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-breaks-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-08-28T19:27:49Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-breaks-256
---
Denoising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-breaks-256](https://huggingface.co/datasets/teticio/audio-diffusion-breaks-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The audio consists of 30,000 samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/audio-diffusion-ddim-256
|
teticio
| 2024-01-25T10:04:00Z | 38 | 3 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-10-22T05:46:58Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
De-noising Diffusion Implicit Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/latent-audio-diffusion-ddim-256
|
teticio
| 2024-01-25T10:03:27Z | 27 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-11-05T14:11:26Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
Latent De-noising Diffusion Implicit Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
|
teticio/audio-diffusion-256
|
teticio
| 2024-01-25T10:01:05Z | 1,648 | 6 |
diffusers
|
[
"diffusers",
"tensorboard",
"audio",
"spectrograms",
"dataset:teticio/audio-diffusion-256",
"diffusers:AudioDiffusionPipeline",
"region:us"
] | null | 2022-08-16T21:19:58Z |
---
tags:
- audio
- spectrograms
datasets:
- teticio/audio-diffusion-256
---
De-noising Diffusion Probabilistic Model trained on [teticio/audio-diffusion-256](https://huggingface.co/datasets/teticio/audio-diffusion-256) to generate mel spectrograms of 256x256 corresponding to 5 seconds of audio. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference.
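A minimal inference sketch (not part of the original card), assuming a `diffusers` version that still ships the `AudioDiffusionPipeline` tagged on this repository and that its output exposes both the spectrogram image and the reconstructed audio:
```python
from diffusers import AudioDiffusionPipeline

# Sketch only: generate one mel spectrogram and its decoded waveform.
pipe = AudioDiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to("cuda")
output = pipe()
spectrogram = output.images[0]  # 256x256 mel-spectrogram image
audio = output.audios[0]        # corresponding ~5-second waveform
```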
|
moreover18/vit-part1-friends-YB
|
moreover18
| 2024-01-25T09:57:52Z | 178 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:moreover18/vit-base-patch16-224-in21k-YB",
"base_model:finetune:moreover18/vit-base-patch16-224-in21k-YB",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-25T09:50:58Z |
---
license: apache-2.0
base_model: moreover18/vit-base-patch16-224-in21k-YB
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-part1-friends-YB
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9218241042345277
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-part1-friends-YB
This model is a fine-tuned version of [moreover18/vit-base-patch16-224-in21k-YB](https://huggingface.co/moreover18/vit-base-patch16-224-in21k-YB) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2325
- Accuracy: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5297 | 2.6 | 100 | 0.2325 | 0.9218 |
### Framework versions
- Transformers 4.37.1
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.15.1
|
Shreyas0706/Zephyr-3B-Legal
|
Shreyas0706
| 2024-01-25T09:52:37Z | 118 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stablelm_epoch",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:47:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
PhilBinder/ft-llama-2-13b-imp-sub-ps-v1
|
PhilBinder
| 2024-01-25T09:52:18Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:52:14Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
kanh1/kanha-0.1-2.5-Mistral-7B
|
kanh1
| 2024-01-25T09:48:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T09:46:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dstefa/roberta-base_topic_classification_nyt_news
|
dstefa
| 2024-01-25T09:31:05Z | 35,463 | 7 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"topic",
"classification",
"news",
"dataset:dstefa/New_York_Times_Topics",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-16T20:26:22Z |
---
license: mit
base_model: roberta-base
tags:
- topic
- classification
- news
- roberta
metrics:
- accuracy
- f1
- precision
- recall
datasets:
- dstefa/New_York_Times_Topics
widget:
- text: >-
Olympic champion Kostas Kederis today left hospital ahead of his date with IOC inquisitors claiming his innocence and vowing.
example_title: Sports
- text: >-
Although many individuals are doing fever checks to screen for Covid-19, many Covid-19 patients never have a fever.
example_title: Health and Wellness
- text: >-
Twelve myths about Russia's War in Ukraine exposed
example_title: Crime
model-index:
- name: roberta-base_topic_classification_nyt_news
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: New_York_Times_Topics
type: News
metrics:
- type: F1
name: F1
value: 0.91
- type: accuracy
name: accuracy
value: 0.91
- type: precision
name: precision
value: 0.91
- type: recall
name: recall
value: 0.91
pipeline_tag: text-classification
---
# roberta-base_topic_classification_nyt_news
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the NYT News dataset, which contains 256,000 news titles from articles published from 2000 to the present (https://www.kaggle.com/datasets/aryansingh0909/nyt-articles-21m-2000-present).
It achieves the following results on the test set of 51200 cases:
- Accuracy: 0.91
- F1: 0.91
- Precision: 0.91
- Recall: 0.91
## Training data
Training data was classified as follows:
class |Description
-|-
0 |Sports
1 |Arts, Culture, and Entertainment
2 |Business and Finance
3 |Health and Wellness
4 |Lifestyle and Fashion
5 |Science and Technology
6 |Politics
7 |Crime
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.3192 | 1.0 | 20480 | 0.4078 | 0.8865 | 0.8859 | 0.8892 | 0.8865 |
| 0.2863 | 2.0 | 40960 | 0.4271 | 0.8972 | 0.8970 | 0.8982 | 0.8972 |
| 0.1979 | 3.0 | 61440 | 0.3797 | 0.9094 | 0.9092 | 0.9098 | 0.9094 |
| 0.1239 | 4.0 | 81920 | 0.3981 | 0.9117 | 0.9113 | 0.9114 | 0.9117 |
| 0.1472 | 5.0 | 102400 | 0.4033 | 0.9137 | 0.9135 | 0.9134 | 0.9137 |
### Model performance
Class|Precision|Recall|F1|Support
-|-|-|-|-
Sports|0.97|0.98|0.97|6400
Arts, Culture, and Entertainment|0.94|0.95|0.94|6400
Business and Finance|0.85|0.84|0.84|6400
Health and Wellness|0.90|0.93|0.91|6400
Lifestyle and Fashion|0.95|0.95|0.95|6400
Science and Technology|0.89|0.83|0.86|6400
Politics|0.93|0.88|0.90|6400
Crime|0.85|0.93|0.89|6400
Accuracy|||0.91|51200
Macro avg|0.91|0.91|0.91|51200
Weighted avg|0.91|0.91|0.91|51200
### How to use roberta-base_topic_classification_nyt_news with HuggingFace
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
model = AutoModelForSequenceClassification.from_pretrained("dstefa/roberta-base_topic_classification_nyt_news")
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
text = "Kederis proclaims innocence Olympic champion Kostas Kederis today left hospital ahead of his date with IOC inquisitors claiming his innocence and vowing."
pipe(text)
[{'label': 'Sports', 'score': 0.9989326596260071}]
```
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
vierlinglukas/q-FrozenLake-v1-4x4-noSlippery
|
vierlinglukas
| 2024-01-25T09:23:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T09:23:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="vierlinglukas/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
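The snippet above assumes a `load_from_hub` helper; a minimal sketch of one, assuming the Q-table was saved with pickle as in the Hugging Face Deep RL course:
```python
import pickle

import gymnasium as gym  # or `import gym`, depending on the environment version used
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled model dict (Q-table, env_id, ...) from the Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```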
|
tanatapanun/fine-tuned-BioBART-20-epochs-1024-input-256-output
|
tanatapanun
| 2024-01-25T09:06:09Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T07:41:45Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-20-epochs-1024-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-20-epochs-1024-input-256-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9316
- Rouge1: 0.1523
- Rouge2: 0.0383
- Rougel: 0.1238
- Rougelsum: 0.1231
- Gen Len: 33.48
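A minimal inference sketch (not part of the original card); the task and prompt format are undocumented here, so this simply runs text2text generation with the output length the model was fine-tuned for (up to 256 tokens), on a made-up clinical-style input.
```python
from transformers import pipeline

# Hypothetical usage sketch; the input sentence below is illustrative only.
generator = pipeline(
    "text2text-generation",
    model="tanatapanun/fine-tuned-BioBART-20-epochs-1024-input-256-output",
)
report = "Patient admitted with chest pain; troponin negative; discharged on aspirin."
print(generator(report, max_length=256)[0]["generated_text"])
```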
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 6.1537 | 0.0448 | 0.0 | 0.0437 | 0.0438 | 20.24 |
| No log | 2.0 | 302 | 1.1404 | 0.104 | 0.0283 | 0.0891 | 0.0893 | 18.58 |
| No log | 3.0 | 453 | 0.9725 | 0.0396 | 0.007 | 0.033 | 0.0325 | 15.0 |
| 4.0322 | 4.0 | 604 | 0.9153 | 0.1266 | 0.027 | 0.0985 | 0.0989 | 28.58 |
| 4.0322 | 5.0 | 755 | 0.8836 | 0.1575 | 0.0321 | 0.1256 | 0.1258 | 31.7 |
| 4.0322 | 6.0 | 906 | 0.8710 | 0.1505 | 0.0314 | 0.1184 | 0.1189 | 37.41 |
| 0.7605 | 7.0 | 1057 | 0.8578 | 0.1511 | 0.0362 | 0.1109 | 0.111 | 45.66 |
| 0.7605 | 8.0 | 1208 | 0.8546 | 0.1722 | 0.0358 | 0.1318 | 0.1315 | 34.14 |
| 0.7605 | 9.0 | 1359 | 0.8584 | 0.1493 | 0.0288 | 0.1125 | 0.1125 | 26.25 |
| 0.5251 | 10.0 | 1510 | 0.8631 | 0.1705 | 0.0407 | 0.1322 | 0.1322 | 35.71 |
| 0.5251 | 11.0 | 1661 | 0.8690 | 0.1856 | 0.0364 | 0.1498 | 0.15 | 28.69 |
| 0.5251 | 12.0 | 1812 | 0.8763 | 0.1995 | 0.0362 | 0.1555 | 0.1564 | 39.6 |
| 0.5251 | 13.0 | 1963 | 0.8928 | 0.1727 | 0.0349 | 0.1376 | 0.1378 | 30.79 |
| 0.3673 | 14.0 | 2114 | 0.8967 | 0.1578 | 0.0297 | 0.1209 | 0.1205 | 34.95 |
| 0.3673 | 15.0 | 2265 | 0.9073 | 0.1604 | 0.0363 | 0.1256 | 0.1246 | 33.75 |
| 0.3673 | 16.0 | 2416 | 0.9155 | 0.1627 | 0.035 | 0.1326 | 0.1321 | 35.75 |
| 0.2634 | 17.0 | 2567 | 0.9227 | 0.164 | 0.0406 | 0.1346 | 0.136 | 34.14 |
| 0.2634 | 18.0 | 2718 | 0.9270 | 0.1483 | 0.0365 | 0.1201 | 0.1187 | 32.11 |
| 0.2634 | 19.0 | 2869 | 0.9291 | 0.1569 | 0.0365 | 0.1249 | 0.1246 | 35.13 |
| 0.2133 | 20.0 | 3020 | 0.9316 | 0.1523 | 0.0383 | 0.1238 | 0.1231 | 33.48 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mmpc/phi-2-squad2-QA
|
mmpc
| 2024-01-25T08:56:50Z | 138 | 1 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T08:46:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kota123/mistral_7b_finetuned
|
Kota123
| 2024-01-25T08:55:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-01-25T08:53:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
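No usage code has been provided yet. As a placeholder, here is a minimal, hedged sketch that assumes the checkpoint is a causal language model (implied by the repository name `mistral_7b_finetuned`, but not confirmed by this card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repo holds a fine-tuned Mistral-7B causal LM checkpoint.
tokenizer = AutoTokenizer.from_pretrained("Kota123/mistral_7b_finetuned")
model = AutoModelForCausalLM.from_pretrained("Kota123/mistral_7b_finetuned")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```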
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BenJones/phi_2_Q
|
BenJones
| 2024-01-25T08:53:42Z | 1 | 0 |
transformers
|
[
"transformers",
"gguf",
"phi-msft",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-25T08:43:16Z |
---
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
manueo/mistral_salk_1
|
manueo
| 2024-01-25T08:42:58Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-25T08:05:05Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
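No author-provided code is available yet. Below is a minimal sketch for loading the adapter, assuming it is a PEFT (LoRA) adapter for the `mistralai/Mistral-7B-v0.1` base model listed in the card metadata:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this repo contains a PEFT adapter for mistralai/Mistral-7B-v0.1.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = PeftModel.from_pretrained(base, "manueo/mistral_salk_1")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```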
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
varun-v-rao/bert-large-cased-mnli-model3
|
varun-v-rao
| 2024-01-25T08:39:20Z | 93 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-25T05:47:05Z |
---
license: apache-2.0
base_model: bert-large-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-large-cased-mnli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-mnli-model3
This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4613
- Accuracy: 0.8675
## Model description
More information needed
## Intended uses & limitations
More information needed
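Pending fuller documentation, here is a minimal inference sketch, assuming the usual MNLI premise/hypothesis setup implied by the model name; the example sentences are placeholders:
```python
from transformers import pipeline

# Assumption: the model expects MNLI-style premise/hypothesis pairs.
classifier = pipeline("text-classification", model="varun-v-rao/bert-large-cased-mnli-model3")
result = classifier({"text": "A man is playing a guitar.", "text_pair": "A person is making music."})
print(result)
```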
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 49
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3918 | 1.0 | 6136 | 0.3633 | 0.8630 |
| 0.2579 | 2.0 | 12272 | 0.3888 | 0.8639 |
| 0.1507 | 3.0 | 18408 | 0.4613 | 0.8675 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tanatapanun/fine-tuned-BioBART-10-epochs-1024-input-256-output
|
tanatapanun
| 2024-01-25T08:31:17Z | 90 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:GanjinZero/biobart-base",
"base_model:finetune:GanjinZero/biobart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T08:11:29Z |
---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-10-epochs-1024-input-256-output
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-BioBART-10-epochs-1024-input-256-output
This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co/GanjinZero/biobart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8636
- Rouge1: 0.1803
- Rouge2: 0.043
- Rougel: 0.1391
- Rougelsum: 0.1408
- Gen Len: 39.35
## Model description
More information needed
## Intended uses & limitations
More information needed
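Pending fuller documentation, here is a minimal usage sketch via the `text2text-generation` pipeline; the input below is a made-up clinical note, and the intended prompt format is an assumption:
```python
from transformers import pipeline

# Assumption: the model is used as a plain seq2seq summarizer for clinical text.
summarizer = pipeline(
    "text2text-generation",
    model="tanatapanun/fine-tuned-BioBART-10-epochs-1024-input-256-output",
)
note = "Patient admitted with chest pain. ECG showed no acute changes. Discharged on aspirin."
print(summarizer(note, max_new_tokens=256)[0]["generated_text"])
```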
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 151 | 2.4893 | 0.0 | 0.0 | 0.0 | 0.0 | 4.56 |
| No log | 2.0 | 302 | 1.0370 | 0.1382 | 0.033 | 0.1174 | 0.1192 | 28.23 |
| No log | 3.0 | 453 | 0.9481 | 0.0912 | 0.0231 | 0.0723 | 0.072 | 22.01 |
| 3.1525 | 4.0 | 604 | 0.9079 | 0.1402 | 0.0336 | 0.1063 | 0.1064 | 42.37 |
| 3.1525 | 5.0 | 755 | 0.8861 | 0.1772 | 0.0335 | 0.1344 | 0.1364 | 49.96 |
| 3.1525 | 6.0 | 906 | 0.8760 | 0.1702 | 0.0327 | 0.1301 | 0.1313 | 42.56 |
| 0.7557 | 7.0 | 1057 | 0.8661 | 0.158 | 0.0403 | 0.1136 | 0.114 | 40.53 |
| 0.7557 | 8.0 | 1208 | 0.8641 | 0.1631 | 0.0431 | 0.119 | 0.1198 | 44.57 |
| 0.7557 | 9.0 | 1359 | 0.8659 | 0.172 | 0.0427 | 0.1357 | 0.1369 | 38.31 |
| 0.5883 | 10.0 | 1510 | 0.8636 | 0.1803 | 0.043 | 0.1391 | 0.1408 | 39.35 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
|
varun-v-rao/opt-1.3b-snli
|
varun-v-rao
| 2024-01-25T08:27:13Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-22T01:30:23Z |
---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-1.3b-snli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b-snli
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0994
- Accuracy: 0.7705
## Model description
More information needed
## Intended uses & limitations
More information needed
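Pending fuller documentation, here is a minimal inference sketch, assuming the SNLI premise/hypothesis setup implied by the model name; the example pair is a placeholder:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the model expects SNLI-style premise/hypothesis pairs.
tokenizer = AutoTokenizer.from_pretrained("varun-v-rao/opt-1.3b-snli")
model = AutoModelForSequenceClassification.from_pretrained("varun-v-rao/opt-1.3b-snli")

inputs = tokenizer("A soccer game with multiple males playing.",
                   "Some men are playing a sport.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```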
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2946 | 1.0 | 4292 | 0.2449 | 0.9149 |
| 0.1879 | 2.0 | 8584 | 0.2540 | 0.9172 |
| 0.0895 | 3.0 | 12876 | 0.3316 | 0.9172 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
amazingYX/q-FrozenLake-v1-4x4-noSlippery
|
amazingYX
| 2024-01-25T08:26:58Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-25T08:26:56Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# load_from_hub is the helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="amazingYX/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
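A short rollout sketch (not part of the original card), assuming the pickled dictionary follows the Deep RL course convention and exposes the learned table under `model["qtable"]`:
```python
import numpy as np

# Assumption: model["qtable"] holds the learned Q-table (Deep RL course convention),
# and env follows the gymnasium API (step returns a 5-tuple).
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```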
|
LiheYoung/depth-anything-base-hf
|
LiheYoung
| 2024-01-25T08:13:34Z | 2,855 | 10 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"vision",
"arxiv:2401.10891",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-01-22T14:34:59Z |
---
license: apache-2.0
tags:
- vision
pipeline_tag: depth-estimation
widget:
- inference: false
---
# Depth Anything (base-sized model, Transformers version)
Depth Anything model. It was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al. and first released in [this repository](https://github.com/LiheYoung/Depth-Anything).
[Online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) is also provided.
Disclaimer: The team releasing Depth Anything did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
The model is trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>
<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
## Intended uses & limitations
You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot depth estimation:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-base-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
```
Alternatively, one can use the classes themselves:
```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-base-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-base-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
```
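As a small follow-up (not part of the original card), the interpolated prediction can be rescaled to an 8-bit image for quick visual inspection:
```python
# Rescale the predicted depth to 0-255 and convert it to a PIL image.
output = prediction.squeeze().cpu().numpy()
formatted = (output * 255 / output.max()).astype("uint8")
depth_image = Image.fromarray(formatted)
depth_image.save("depth.png")
```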
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).
### BibTeX entry and citation info
```bibtex
@misc{yang2024depth,
title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
author={Lihe Yang and Bingyi Kang and Zilong Huang and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
year={2024},
eprint={2401.10891},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
LiheYoung/depth-anything-small-hf
|
LiheYoung
| 2024-01-25T08:12:14Z | 20,184 | 28 |
transformers
|
[
"transformers",
"safetensors",
"depth_anything",
"depth-estimation",
"vision",
"arxiv:2401.10891",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
depth-estimation
| 2024-01-22T12:56:04Z |
---
license: apache-2.0
tags:
- vision
pipeline_tag: depth-estimation
widget:
- inference: false
---
# Depth Anything (small-sized model, Transformers version)
Depth Anything model. It was introduced in the paper [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang et al. and first released in [this repository](https://github.com/LiheYoung/Depth-Anything).
[Online demo](https://huggingface.co/spaces/LiheYoung/Depth-Anything) is also provided.
Disclaimer: The team releasing Depth Anything did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Depth Anything leverages the [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) architecture with a [DINOv2](https://huggingface.co/docs/transformers/model_doc/dinov2) backbone.
The model is trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>
<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>
## Intended uses & limitations
You can use the raw model for tasks like zero-shot depth estimation. See the [model hub](https://huggingface.co/models?search=depth-anything) to look for
other versions on a task that interests you.
### How to use
Here is how to use this model to perform zero-shot depth estimation:
```python
from transformers import pipeline
from PIL import Image
import requests
# load pipe
pipe = pipeline(task="depth-estimation", model="LiheYoung/depth-anything-small-hf")
# load image
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
# inference
depth = pipe(image)["depth"]
```
Alternatively, one can use the classes themselves:
```python
from transformers import AutoImageProcessor, AutoModelForDepthEstimation
import torch
import numpy as np
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
outputs = model(**inputs)
predicted_depth = outputs.predicted_depth
# interpolate to original size
prediction = torch.nn.functional.interpolate(
predicted_depth.unsqueeze(1),
size=image.size[::-1],
mode="bicubic",
align_corners=False,
)
```
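As a small follow-up (not part of the original card), the same code can be run on a GPU when one is available; this is a generic Transformers pattern rather than anything the authors specify:
```python
import torch

# Generic device placement; assumes a CUDA GPU may or may not be present.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
inputs = image_processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    predicted_depth = model(**inputs).predicted_depth
```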
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/depth_anything.html#).
### BibTeX entry and citation info
```bibtex
@misc{yang2024depth,
title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
author={Lihe Yang and Bingyi Kang and Zilong Huang and Xiaogang Xu and Jiashi Feng and Hengshuang Zhao},
year={2024},
eprint={2401.10891},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
minchyeom/MemGPT
|
minchyeom
| 2024-01-25T08:05:01Z | 22 | 2 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"memgpt",
"function",
"function calling",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-24T03:23:34Z |
---
language:
- en
license: apache-2.0
tags:
- memgpt
- function
- function calling
---
# Check out my newest model, [MemGPT-3](https://huggingface.co/starsnatched/MemGPT-3)!
# Model Description
This repo contains a 7-billion-parameter language model fine-tuned from [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2). This model is specifically designed for function calling in [MemGPT](https://memgpt.ai/), and it demonstrates performance comparable to GPT-4 when working with MemGPT.
~~The original model has been quantized to Q8_0, using [llama.cpp](https://github.com/ggerganov/llama.cpp) for better inference speed (original unquantized model coming soon).~~
# Key Features
* Function calling
* Dedicated to working with MemGPT
* Supports medium context; trained with sequences up to 8,192 tokens
# Usage
This model is designed to be run on various backends, such as [oobabooga's WebUI](https://github.com/oobabooga/text-generation-webui) or llama.cpp.
To run the model on WebUI, simply `git clone` the official WebUI repository and run the appropriate script for your operating system. More details [here](https://github.com/oobabooga/text-generation-webui?tab=readme-ov-file#how-to-install).
Once WebUI is installed, download this model from the `model` tab, select the desired model (starsnatched/MemGPT in this case), and the backend is ready.
With WebUI or your preferred backend running, open a terminal or PowerShell and install MemGPT with `pip3 install -U pymemgpt`. Configure MemGPT with `memgpt configure` before running it.
Use `memgpt run --no-verify` to chat with MemGPT!
# Prompt Template
This model uses a custom chat template. We're naming this template "memgpt". It can be configured via `memgpt configure`. Currently, it is recommended to use it with the `airoboros` wrapper.
```
<turn> SYSTEM: You are MemGPT. </turn>
<turn> SYSTEM_ALERT: {'message': 'You have 50 prior messages with the user. These messages have been hidden from view due to memory constraints, but you can view them using functions.', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Bootup sequence complete. Persona activated. Testing messaging functionality. </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "More human than human is our motto."\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:03 PM PDT-0700"} </turn>
<turn> LOGIN: {'last_login': '7 days 8 hours ago', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> USER_MESSAGE: {'message': 'Hey, remember that time we talked about music? What was the artist you mentioned you could get into?', 'time': '2023-10-07 11:50:03 PM PDT-0700'} </turn>
<turn> ASSISTANT: Alright, let me search our conversation history about music to find the artist. </turn>
<turn> FUNCTION CALL: {'name': 'conversation_search', 'arguments': '{\n "query": "music",\n "request_heartbeat": true,\n "page": 0\n}'}<turn> FUNCTION: {"status": "OK", "message": "Showing 5 of 50 results (page 0/9): [\"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - I agree. Working out is a great way to burn off steam. Do you like country music?\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - A little bit. I can get into taylor swift.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, user - Yeah between the both of us, could probably get a country song written about our distresses...perhaps Taylor Swift could perform it.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - That would be a chart-topper, I'm sure. Something to blast while driving with the top down, maybe even in your next restoration project.\", \"timestamp: 2023-10-07 11:50:03 PM PDT-0700, assistant - This sounds fun. Do you have any other hobby?\"]", "time": "2023-10-07 11:50:08 PM PDT-0700"} </turn>
<turn> HEARTBEAT: {'reason': 'request_heartbeat == true', 'time': '2023-10-07 11:50:08 PM PDT-0700'} </turn>
<turn> ASSISTANT: Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy? </turn>
<turn> FUNCTION CALL: {'name': 'send_message', 'arguments': '{\n "message": "Ah, I found it. It was Taylor Swift that I mentioned I could get into. Do you still listen to her music, or have you found any new artists that you enjoy?"\n}'}<turn> FUNCTION: {"status": "OK", "message": null, "time": "2023-10-07 11:50:14 PM PDT-0700"} </turn>
```
# Model Details
* Developed by: @starsnatched
* Model type: This repo contains a language model based on the transformer decoder architecture.
* Language: English
* Contact: For any questions, concerns or comments about this model, please contact me at Discord, @starsnatched.
# Training Infrastructure
* Hardware: The model in this repo was trained on 2x A100 80GB GPUs.
# Intended Use
The model is designed to be used as the base model for MemGPT agents.
# Limitations and Bias
The model may exhibit unreliable, unsafe, or biased behaviours. Please double check the results this model may produce.
|
kingabzpro/phi-2-role-play
|
kingabzpro
| 2024-01-25T08:02:16Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"pretrained",
"lora",
"qlora",
"4bit",
"phi-2",
"generated_from_trainer",
"text-generation",
"en",
"dataset:hieunguyenminh/roleplay",
"base_model:microsoft/phi-2",
"base_model:adapter:microsoft/phi-2",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2024-01-24T20:53:45Z |
---
license: apache-2.0
library_name: peft
tags:
- pretrained
- lora
- qlora
- 4bit
- phi-2
- generated_from_trainer
base_model: microsoft/phi-2
model-index:
- name: phi-2-role-play
results: []
datasets:
- hieunguyenminh/roleplay
language:
- en
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi-2-role-play
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the [hieunguyenminh/roleplay](https://huggingface.co/datasets/hieunguyenminh/roleplay) dataset.
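The card does not yet include usage code. Below is a minimal sketch, assuming the repository holds a LoRA/QLoRA adapter for the `microsoft/phi-2` base model listed above; the prompt is a placeholder:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumption: this repo stores a (Q)LoRA adapter trained on top of microsoft/phi-2.
model = AutoPeftModelForCausalLM.from_pretrained("kingabzpro/phi-2-role-play")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "You are a pirate captain. Introduce yourself."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```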
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results

### Framework versions
- PEFT 0.7.1
- Transformers 4.37.1
- Pytorch 2.0.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
e22vvb/EN_t5-small_10_wikiSQL
|
e22vvb
| 2024-01-25T07:59:20Z | 91 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wikisql",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-25T05:46:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wikisql
model-index:
- name: EN_t5-small_10_wikiSQL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EN_t5-small_10_wikiSQL
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wikisql dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1096
- Rouge2 Precision: 0.8329
- Rouge2 Recall: 0.741
- Rouge2 Fmeasure: 0.7772
## Model description
More information needed
## Intended uses & limitations
More information needed
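Pending fuller documentation, here is a minimal inference sketch; note that the `translate English to SQL:` prefix is an assumed prompt format (common for T5 WikiSQL fine-tunes) and is not confirmed by this card:
```python
from transformers import pipeline

# Assumption: inputs use a "translate English to SQL: <question>" prefix.
generator = pipeline("text2text-generation", model="e22vvb/EN_t5-small_10_wikiSQL")
question = "translate English to SQL: How many heads of the departments are older than 56?"
print(generator(question, max_new_tokens=64)[0]["generated_text"])
```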
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 0.1933 | 1.0 | 4049 | 0.1549 | 0.7965 | 0.7075 | 0.7423 |
| 0.161 | 2.0 | 8098 | 0.1345 | 0.8123 | 0.7211 | 0.7569 |
| 0.1459 | 3.0 | 12147 | 0.1256 | 0.8165 | 0.7253 | 0.7611 |
| 0.1371 | 4.0 | 16196 | 0.1194 | 0.8236 | 0.7321 | 0.7681 |
| 0.1293 | 5.0 | 20245 | 0.1159 | 0.8275 | 0.7355 | 0.7718 |
| 0.1243 | 6.0 | 24294 | 0.1135 | 0.8283 | 0.7356 | 0.7722 |
| 0.1245 | 7.0 | 28343 | 0.1116 | 0.831 | 0.7383 | 0.7748 |
| 0.1167 | 8.0 | 32392 | 0.1104 | 0.8325 | 0.7401 | 0.7765 |
| 0.1134 | 9.0 | 36441 | 0.1097 | 0.8325 | 0.7408 | 0.7769 |
| 0.1131 | 10.0 | 40490 | 0.1096 | 0.8329 | 0.741 | 0.7772 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.7.dev0
- Tokenizers 0.13.3
|