modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-wikitext | nyunai | 2024-03-06T10:14:20Z | 6 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us"] | text-generation | 2024-03-06T09:36:44Z |
---
library_name: transformers
tags: []
---
## Model Description
This model is a compressed version of the OpenHathi-7B-Hi base model, optimized for chat format text data in the Hindi language. It has been quantized using the AWQ technique with calibration data from the wikitext dataset. The compression process aims to reduce the model size while preserving its performance on chat-oriented tasks.
## Model Usage:
The compressed model can be utilized for various natural language processing tasks, particularly those involving chat format text data in Hindi. It can be deployed in conversational AI systems, chatbots, or any application requiring efficient processing of chat-style interactions.
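Below is a minimal loading sketch (not part of the original card) using 🤗 Transformers; it assumes the `autoawq` package is installed and uses an illustrative Hindi prompt.
```python
# Minimal sketch: load the AWQ-quantized checkpoint with 🤗 Transformers.
# Assumes `pip install transformers autoawq` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-wikitext"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt only; replace with your own text.
inputs = tokenizer("नमस्ते, आप कैसे हैं?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```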
## Performance Metrics:
- **Model Size:** 4.15 GB
- **Compression Technique:** AWQ
- **Calibration Data:** [wikitext](https://huggingface.co/datasets/wikitext) dataset
- **Tokenization Model Size:** 968 KB
- **Performance:** The compressed model's performance has been evaluated on various chat-oriented tasks, demonstrating efficiency in handling conversational text data while maintaining comparable performance to the original base model.
**Limitations:** While the compressed model offers significant reductions in size, there may be slight trade-offs in performance compared to the full-sized base model. It may not perform optimally on tasks outside the scope of chat-oriented text data in Hindi.
| Leelakrish/my-pet-lion-xzg | Leelakrish | 2024-03-06T10:12:19Z | 0 | 0 | null | ["safetensors", "NxtWave-GenAI-Webinar", "text-to-image", "stable-diffusion", "license:creativeml-openrail-m", "region:us"] | text-to-image | 2024-03-06T10:10:10Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Lion-XZG Dreambooth model trained by Leelakrish following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21BRS1638
Sample pictures of this concept:

| Hemg/Brain-Tumor-Classification | Hemg | 2024-03-06T10:11:06Z | 38 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "base_model:finetune:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | image-classification | 2024-03-06T05:51:46Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Brain-Tumor-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain-Tumor-Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0872
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
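Pending details from the author, a minimal inference sketch (not part of the original card) using the 🤗 `pipeline` API could look like this; the image path is a placeholder.
```python
# Sketch: classify an MRI image with the fine-tuned ViT checkpoint.
from transformers import pipeline

classifier = pipeline("image-classification", model="Hemg/Brain-Tumor-Classification")
# "scan.jpg" is a placeholder path to a local image file.
print(classifier("scan.jpg"))
```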
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2074 | 1.0 | 44 | 0.8060 | 0.8128 |
| 0.4897 | 2.0 | 88 | 0.3008 | 0.9274 |
| 0.2462 | 3.0 | 132 | 0.2464 | 0.9331 |
| 0.1937 | 4.0 | 176 | 0.1918 | 0.9502 |
| 0.1523 | 5.0 | 220 | 0.1699 | 0.9502 |
| 0.1371 | 6.0 | 264 | 0.1372 | 0.9644 |
| 0.1104 | 7.0 | 308 | 0.1121 | 0.9708 |
| 0.1097 | 8.0 | 352 | 0.1220 | 0.9651 |
| 0.1015 | 9.0 | 396 | 0.1053 | 0.9737 |
| 0.0841 | 10.0 | 440 | 0.1142 | 0.9708 |
| 0.0839 | 11.0 | 484 | 0.1073 | 0.9708 |
| 0.0771 | 12.0 | 528 | 0.1156 | 0.9665 |
| 0.074 | 13.0 | 572 | 0.1203 | 0.9644 |
| 0.0652 | 14.0 | 616 | 0.0706 | 0.9858 |
| 0.0694 | 15.0 | 660 | 0.0984 | 0.9744 |
| 0.0596 | 16.0 | 704 | 0.0872 | 0.9758 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| slukas99/tex_inv_af_dress | slukas99 | 2024-03-06T10:07:28Z | 10 | 0 | diffusers | ["diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "base_model:runwayml/stable-diffusion-v1-5", "base_model:adapter:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-03-06T08:47:39Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - slukas99/tex_inv_af_dress
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
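While the snippet above is still a TODO, a minimal sketch with `diffusers` is given below; the placeholder token `<af-dress>` is an assumption and should be replaced with the token actually used during training.
```python
# Sketch: load SD 1.5 and attach the textual inversion embedding from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("slukas99/tex_inv_af_dress")

# "<af-dress>" is an assumed placeholder token; check the repo for the real one.
image = pipe("a photo of a woman wearing a <af-dress> dress").images[0]
image.save("af_dress.png")
```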
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
| YMKiii/output1 | YMKiii | 2024-03-06T10:04:15Z | 19 | 0 | diffusers | ["diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2", "base_model:finetune:stabilityai/stable-diffusion-2", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | 2024-03-06T09:17:04Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2
inference: true
instance_prompt: interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - YMKiii/output1
This is a dreambooth model derived from stabilityai/stable-diffusion-2. The weights were trained on interior design using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
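While the snippet above is still a TODO, a minimal sketch with `diffusers` is given below; the prompt reuses the instance prompt (`interior design`) from the card metadata.
```python
# Sketch: run the DreamBooth-trained pipeline directly from the Hub.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "YMKiii/output1", torch_dtype=torch.float16
).to("cuda")

# The instance prompt "interior design" comes from the card metadata.
image = pipe("interior design, a bright modern living room").images[0]
image.save("interior.png")
```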
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
| hellie/newsroommodel | hellie | 2024-03-06T10:02:52Z | 5 | 0 | transformers | ["transformers", "safetensors", "pegasus", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2024-03-06T10:01:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
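Pending details from the author, a generic sketch for a Pegasus text2text checkpoint (inferred from the repo tags, not confirmed by the card) is:
```python
# Sketch: summarization-style generation with the Pegasus checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "hellie/newsroommodel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Replace this with the article text you want to summarize."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```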
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| mii-llm/maestrale-chat-v0.3-beta-sft | mii-llm | 2024-03-06T10:00:53Z | 14 | 2 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "sft", "it", "chatml", "axolotl", "conversational", "license:cc-by-nc-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-02-09T09:26:06Z |
---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
prompt_template: <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|>
<|im_start|>assistant
model-index:
- name: maestrale-chat-v0.3-beta
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/dgSNbTl.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat beta ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7b for the Italian language, with continued pre-training on a curated, large-scale, high-quality Italian corpus.
- **Fine-Tuning**: SFT performed on conversations/instructions for three epochs.
**v0.3**
- Function calling
- Reduced default system prompt to avoid wasting tokens (pre-alignment)
This model uses ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Usage:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
TextStreamer
)
import torch
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.3-beta")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.3-beta", load_in_8bit=True, device_map="auto")
gen = GenerationConfig(
do_sample=True,
temperature=0.7,
repetition_penalty=1.2,
top_k=50,
top_p=0.95,
max_new_tokens=500,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)
messages = [
{"role": "system", "content": "Sei un assistente utile."},
{"role": "user", "content": "{prompt}"}
]
with torch.no_grad(), torch.backends.cuda.sdp_kernel(
enable_flash=True,
enable_math=False,
enable_mem_efficient=False
):
temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(temp, return_tensors="pt").to("cuda")
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
**inputs,
streamer=streamer,
generation_config=gen
)
```
## Intended uses & limitations
This is a beta SFT version and is not yet `aligned`; it is a first test. We are working on alignment data and evals.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
| nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-samvaad-hi-v1-chat-format | nyunai | 2024-03-06T10:00:46Z | 8 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us"] | text-generation | 2024-03-06T09:34:58Z |
---
library_name: transformers
tags: []
---
## Model Description
This model is a compressed version of the OpenHathi-7B-Hi base model, optimized for chat format text data in the Hindi language. It has been quantized using the AWQ technique with calibration data from the samvaad-hi-v1 dataset. The compression process aims to reduce the model size while preserving its performance on chat-oriented tasks.
## Model Usage:
The compressed model can be utilized for various natural language processing tasks, particularly those involving chat format text data in Hindi. It can be deployed in conversational AI systems, chatbots, or any application requiring efficient processing of chat-style interactions.
## Performance Metrics:
- **Model Size:** 4.15 GB
- **Compression Technique:** AWQ
- **Calibration Data:** [samvaad-hi-v1 chat format](https://huggingface.co/datasets/shwubham/samvaad-hi-v1-chat-format) dataset
- **Tokenization Model Size:** 968 KB
- **Performance:** The compressed model's performance has been evaluated on various chat-oriented tasks, demonstrating efficiency in handling conversational text data while maintaining comparable performance to the original base model.
**Limitations:** While the compressed model offers significant reductions in size, there may be slight trade-offs in performance compared to the full-sized base model. It may not perform optimally on tasks outside the scope of chat-oriented text data in Hindi.
| nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-samvaad-hi-v1-tulu-format | nyunai | 2024-03-06T09:59:28Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "awq", "region:us"] | text-generation | 2024-03-06T09:38:22Z |
---
library_name: transformers
tags: []
---
## Model Description
This model is a compressed version of the OpenHathi-7B-Hi base model, optimized for chat format text data in the Hindi language. It has been quantized using the AWQ technique with calibration data from the samvaad-hi-v1 dataset. The compression process aims to reduce the model size while preserving its performance on chat-oriented tasks.
## Model Usage:
The compressed model can be utilized for various natural language processing tasks, particularly those involving chat format text data in Hindi. It can be deployed in conversational AI systems, chatbots, or any application requiring efficient processing of chat-style interactions.
## Performance Metrics:
- **Model Size:** 4.15 GB
- **Compression Technique:** AWQ
- **Calibration Data:** [samvaad-hi-v1 tulu format](https://huggingface.co/datasets/shwubham/samvaad-hi-v1-tulu-format) dataset
- **Tokenization Model Size:** 968 KB
- **Performance:** The compressed model's performance has been evaluated on various chat-oriented tasks, demonstrating efficiency in handling conversational text data while maintaining comparable performance to the original base model.
**Limitations:** While the compressed model offers significant reductions in size, there may be slight trade-offs in performance compared to the full-sized base model. It may not perform optimally on tasks outside the scope of chat-oriented text data in Hindi.
| csukuangfj/sherpa-ncnn-toolchains | csukuangfj | 2024-03-06T09:58:32Z | 0 | 0 | null | ["region:us"] | null | 2022-12-18T05:53:46Z |
# Introduction
Please refer to https://k2-fsa.github.io/sherpa/ncnn/index.html#
for usage.
| joshus/esg_base_pos_3 | joshus | 2024-03-06T09:57:24Z | 5 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2024-03-06T09:57:07Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# joshus/esg_base_pos_3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joshus/esg_base_pos_3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joshus/esg_base_pos_3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 108,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
| hiiamsid/mistral_yt_transcribe_classification_opt_train | hiiamsid | 2024-03-06T09:44:38Z | 5 | 0 | transformers | ["transformers", "safetensors", "mistral", "text-generation", "generated_from_trainer", "conversational", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-05T17:49:44Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: mistral_yt_transcribe_classification_opt_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_yt_transcribe_classification_opt_train
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0335
## Model description
More information needed
## Intended uses & limitations
More information needed
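Pending details from the author, a generic generation sketch for a Mistral-Instruct-style fine-tune is shown below; the prompt is purely illustrative.
```python
# Sketch: chat-style generation with the fine-tuned Mistral-Instruct checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "hiiamsid/mistral_yt_transcribe_classification_opt_train"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Illustrative prompt; the expected input format is not documented in the card.
messages = [{"role": "user", "content": "Classify this YouTube transcript: ..."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```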
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0393 | 1.0 | 640 | 0.0381 |
| 0.0334 | 2.0 | 1281 | 0.0340 |
| 0.0226 | 3.0 | 1921 | 0.0343 |
| 0.0275 | 4.0 | 2560 | 0.0335 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
| jelldps/malaysian-mistral-7b-32k-instructions-v4-gguf | jelldps | 2024-03-06T09:41:56Z | 6 | 3 | transformers | ["transformers", "gguf", "mistral", "text-generation", "conversational", "ms", "base_model:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5", "base_model:quantized:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2024-02-27T10:32:08Z |
---
base_model: mesolitica/malaysian-mistral-7b-32k-instructions-v3.5
language:
- ms
---
# malaysian-mistral-7b-32k-instructions-v4 - GGUF
- Model creator: [Mesolitica](https://huggingface.co/mesolitica)
- Original model: [malaysian-mistral-7b-32k-instructions-v4](https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4)
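No usage example is included in the card; a minimal sketch with `huggingface_hub` and `llama-cpp-python` follows. The GGUF filename below is a guess, so check the repository's file list for the actual name.
```python
# Sketch: download a GGUF file from this repo and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is an assumption; list the repo files to find the real one.
model_path = hf_hub_download(
    repo_id="jelldps/malaysian-mistral-7b-32k-instructions-v4-gguf",
    filename="model-q4_0.gguf",
)
llm = Llama(model_path=model_path, n_ctx=4096)
print(llm("Apa khabar?", max_tokens=64)["choices"][0]["text"])
```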
| vidhi0206/setfit-paraphrase-mpnet-emotion | vidhi0206 | 2024-03-06T09:41:22Z | 4 | 0 | setfit | ["setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us"] | text-classification | 2024-02-28T12:34:57Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: i honestly thought impossible at this point i feel pretty
- text: i feel convinced that im going to shy away from whatever is really good for
me
- text: i feel guilt that i should be more caring and im not
- text: i found myself feeling nostalgic as i thought about the temporarily abandoned
little bishop chronicles
- text: i am feeling very indecisive and spontaneous
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.5225
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'i feel so much better about that number'</li><li>'i feel like i have reached a plateau where im not buying as much as i use to and feeling more satisfied with my wardrobe and personal style'</li><li>'i feel especially thankful'</li></ul> |
| 3 | <ul><li>'i feel so violent just want to break some glass'</li><li>'i always feel rushed on the way to visit no comments'</li><li>'i think maybe about how strongly she feels about him and being there for him but brad looks really distracted'</li></ul> |
| 5 | <ul><li>'i feel like when i was a kid it was constantly impressed upon me how awesome ants are'</li><li>'i feel like it s a boy i would be pretty shocked if it was so somewhere in there my gut or my brain is saying girl'</li><li>'i feel like every day i walk around with so much stress and sadness that im literally amazed im still here that i still function that im still basically a friendly stable person'</li></ul> |
| 0 | <ul><li>'i would feel that a few words would be not only inadequate but a travesty'</li><li>'i attributed this depression to feeling inadequate against the unrealistic ideals of the lds church and while i still hold those ideals somewhat responsible i recognize this pattern of behavior'</li><li>'ive been resting and feeling generally unpleasant and queasy but in that frustrating background way where you dont feel right but cant place an exact cause'</li></ul> |
| 4 | <ul><li>'i was starting to feel scared for both of their safety and i wish those officers hadn t left no matter how much i hated them'</li><li>'i am already feeling frantic'</li><li>'i believe in you moment we all feel til then it s one more skeptical song'</li></ul> |
| 2 | <ul><li>'i do feel sympathetic to the parties involved now that their careers are down the drain'</li><li>'i like frappes and shit when im feeling naughty but i drink tea daily'</li><li>'i will pay a month for months and feel shame every time i grill a hot dog from that point on'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.5225 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotion")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 19.3333 | 48 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.3009 | - |
| 0.2083 | 50 | 0.1916 | - |
| 0.4167 | 100 | 0.0393 | - |
| 0.625 | 150 | 0.0129 | - |
| 0.8333 | 200 | 0.0034 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| s14pe/ppo-LunarLander-v2 | s14pe | 2024-03-06T09:23:50Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2024-03-05T14:14:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.00 +/- 15.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
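Until the TODO above is filled in, a minimal loading and evaluation sketch could look like this; the checkpoint filename inside the repo is an assumption.
```python
# Sketch: download the trained agent and evaluate it on LunarLander-v2.
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# NOTE: "ppo-LunarLander-v2.zip" is a guessed filename; check the repo files.
checkpoint = load_from_hub("s14pe/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```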
| alfredplpl/gemma-2b-it-ja-poc-2 | alfredplpl | 2024-03-06T09:21:13Z | 2 | 2 | peft | ["peft", "safetensors", "ja", "en", "license:other", "region:us"] | null | 2024-03-05T12:17:24Z |
---
language:
- ja
- en
license: other
library_name: peft
license_name: gemma-terms-of-use
license_link: https://www.kaggle.com/models/google/gemma/license/consent
---
# Introduction
This is a commercially usable AI that can speak Japanese.
[Google Colab](https://colab.research.google.com/drive/1AZ3oW1RJ8JDi4DGh3_z__aAd1lUVlswi?usp=sharing)
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel
# Prepare the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("alfredplpl/ja-aozora-wikipedia-gemmba-2b")
model = AutoModelForCausalLM.from_pretrained("alfredplpl/ja-aozora-wikipedia-gemmba-2b")
model = PeftModel.from_pretrained(model = model, model_id = "alfredplpl/gemma-2b-it-ja-poc-2")
# Prepare the prompt
prompt="""
あなたは親切なアシスタントです。英語は喋らず、日本語だけ喋ってください。
<start_of_turn>user
人生で大切なことはなんですか?<end_of_turn>
<start_of_turn>model
"""
# Run inference
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**input_ids,
max_new_tokens=128,
do_sample=True,
top_p=0.95,
temperature=0.2,
repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
## Result
```bash
<bos>
あなたは親切なアシスタントです。英語は喋らず、日本語だけ喋ってください。
<start_of_turn>user
人生で大切なことはなんですか?<end_of_turn>
<start_of_turn>model
人生で大切なのは、幸せになることです。<end_of_turn>
<eos>
```
# Chat Template
```bash
<bos>
{{system prompt}}
<start_of_turn>user
{{prompt}}<end_of_turn>
<start_of_turn>model
{{response}}<end_of_turn>
<eos>
```
# Base model
- free-ai-ltd/ja-aozora-wikipedia-gemmba-2b (private)
# Dataset for Instruction tuning
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-ja
- kunishou/oasst1-chat-44k-ja
- kunishou/oasst2-chat-68k-ja
- kunishou/cnn-dailymail-27k-ja
- kunishou/databricks-dolly-69k-ja-en-translation
- kunishou/databricks-dolly-15k-ja
- shi3z/OpenOrcaJapanese
# How this model was made
- [LoRA](https://gist.github.com/alfredplpl/e20cad036c151f38645a1abc87f56a2f)
| DhairyaSarin/promotional-text-analyser-v2 | DhairyaSarin | 2024-03-06T09:11:17Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1", "region:us"] | null | 2024-03-06T09:10:46Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
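As a stopgap, a generic PEFT-adapter loading sketch, based only on the base model listed in the card metadata, is shown below; the prompt is illustrative.
```python
# Sketch: load the base Mistral-Instruct model and attach this PEFT adapter.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "DhairyaSarin/promotional-text-analyser-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt only.
inputs = tokenizer("Analyse this promotional text: ...", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```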
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
| zxhezexin/openlrm-mix-large-1.1 | zxhezexin | 2024-03-06T08:57:33Z | 45 | 6 | transformers | ["transformers", "pytorch", "safetensors", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | image-to-3d | 2024-03-04T06:57:53Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under the circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
| zxhezexin/openlrm-mix-small-1.1 | zxhezexin | 2024-03-06T08:56:32Z | 31 | 1 | transformers | ["transformers", "pytorch", "safetensors", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | image-to-3d | 2024-03-04T07:05:06Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under the circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
| zxhezexin/openlrm-obj-large-1.1 | zxhezexin | 2024-03-06T08:56:16Z | 22 | 1 | transformers | ["transformers", "pytorch", "safetensors", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | image-to-3d | 2024-03-04T06:42:12Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under the circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
| zxhezexin/openlrm-obj-small-1.1 | zxhezexin | 2024-03-06T08:54:51Z | 50 | 1 | transformers | ["transformers", "pytorch", "safetensors", "image-to-3d", "dataset:allenai/objaverse", "arxiv:2311.04400", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us"] | image-to-3d | 2024-03-04T06:35:29Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
ingeol/q2d
|
ingeol
| 2024-03-06T08:52:16Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-06T08:50:58Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/q2d
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/q2d')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/q2d')
model = AutoModel.from_pretrained('ingeol/q2d')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
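As a quick illustration, the embeddings produced above can be compared with cosine similarity, e.g. for semantic search (this continues from the snippet above and reuses its `sentence_embeddings` tensor):
```python
import torch.nn.functional as F

# Compare the two example sentences embedded above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.4f}")
```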
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2d)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7797 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
hwkwon/S-SOLAR-10.7B-v1.1
|
hwkwon
| 2024-03-06T08:50:16Z | 2,258 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T08:34:50Z |
---
license: cc-by-nc-4.0
language:
- ko
---
# S-SOLAR-10.7B
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0).
### Training Data
TBA
### Prompt Template
```
### User: User query input
### Assistant:
```
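A minimal generation sketch using this template, assuming standard 🤗 Transformers causal-LM loading (the query and generation settings below are only examples):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hwkwon/S-SOLAR-10.7B-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)

prompt = "### User: 안녕하세요?\n\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```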
### License
This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
|
aryachakraborty/DeepSeek-1.3B-IT-NL-SQL-V2
|
aryachakraborty
| 2024-03-06T08:49:27Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T08:47:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VinitRuparelia/mountain
|
VinitRuparelia
| 2024-03-06T08:47:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T08:40:23Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Mountain Dreambooth model trained by VinitRuparelia following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: RGIT_669
Sample pictures of this concept:


|
AlignmentResearch/robust_llm_z685n973_from_EleutherAI_pythia-14m
|
AlignmentResearch
| 2024-03-06T08:44:50Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T08:44:44Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_z685n973_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_z685n973_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
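No usage example is included above; a minimal sketch assuming the standard 🤗 Transformers text-classification pipeline (the label names depend on how the classification head was configured):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_z685n973_from_EleutherAI_pythia-14m",
)
print(classifier("This is an example input."))
```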
|
minhah/videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
|
minhah
| 2024-03-06T08:43:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:minhah/videomae-base-finetuned-ucf101-subset",
"base_model:finetune:minhah/videomae-base-finetuned-ucf101-subset",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-03-06T07:10:58Z |
---
license: cc-by-nc-4.0
base_model: minhah/videomae-base-finetuned-ucf101-subset
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
This model is a fine-tuned version of [minhah/videomae-base-finetuned-ucf101-subset](https://huggingface.co/minhah/videomae-base-finetuned-ucf101-subset) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6593
- Accuracy: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 576
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.729 | 0.13 | 73 | 1.6346 | 0.3408 |
| 1.683 | 1.13 | 146 | 1.6505 | 0.3029 |
| 1.6889 | 2.13 | 219 | 1.6359 | 0.3408 |
| 1.6853 | 3.13 | 292 | 1.6739 | 0.2398 |
| 1.5793 | 4.13 | 365 | 1.6679 | 0.2588 |
| 1.5783 | 5.13 | 438 | 1.6091 | 0.3324 |
| 1.5745 | 6.13 | 511 | 1.6306 | 0.3072 |
| 1.5704 | 7.11 | 576 | 1.6573 | 0.2707 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
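A minimal inference sketch, assuming the standard 🤗 Transformers VideoMAE classification API (the random frames below merely stand in for a real 16-frame clip):
```python
import numpy as np
import torch
from transformers import AutoImageProcessor, VideoMAEForVideoClassification

ckpt = "minhah/videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# 16 dummy RGB frames of 224x224 stand in for a real video clip
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```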
|
YMKiii/output
|
YMKiii
| 2024-03-06T08:40:26Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-28T07:39:24Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: Interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - YMKiii/output
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on Interior design using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
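Until the snippet above is filled in, here is a minimal sketch assuming the standard 🤗 Diffusers text-to-image loading pattern and the instance prompt from this card:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "YMKiii/output", torch_dtype=torch.float16
).to("cuda")

image = pipe("Interior design").images[0]  # instance prompt used during training
image.save("interior.png")
```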
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
Litzy619/V0305P2
|
Litzy619
| 2024-03-06T08:39:00Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-06T02:27:50Z |
---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305P2
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3061 | 0.09 | 10 | 0.1617 |
| 0.1712 | 0.17 | 20 | 0.1558 |
| 0.1564 | 0.26 | 30 | 0.1535 |
| 0.1526 | 0.34 | 40 | 0.1479 |
| 0.1503 | 0.43 | 50 | 0.1506 |
| 0.1563 | 0.51 | 60 | 0.1505 |
| 0.1517 | 0.6 | 70 | 0.1507 |
| 0.1533 | 0.68 | 80 | 0.1489 |
| 0.1491 | 0.77 | 90 | 0.1488 |
| 0.1523 | 0.85 | 100 | 0.1471 |
| 0.1522 | 0.94 | 110 | 0.1433 |
| 0.1381 | 1.02 | 120 | 0.1229 |
| 0.1303 | 1.11 | 130 | 0.1206 |
| 0.1155 | 1.19 | 140 | 0.1018 |
| 0.1095 | 1.28 | 150 | 0.0933 |
| 0.103 | 1.37 | 160 | 0.0906 |
| 0.1007 | 1.45 | 170 | 0.0904 |
| 0.0895 | 1.54 | 180 | 0.0887 |
| 0.0914 | 1.62 | 190 | 0.0840 |
| 0.0943 | 1.71 | 200 | 0.0808 |
| 0.0938 | 1.79 | 210 | 0.0757 |
| 0.0884 | 1.88 | 220 | 0.0666 |
| 0.0862 | 1.96 | 230 | 0.0733 |
| 0.0709 | 2.05 | 240 | 0.0748 |
| 0.0601 | 2.13 | 250 | 0.0730 |
| 0.0593 | 2.22 | 260 | 0.0632 |
| 0.059 | 2.3 | 270 | 0.0757 |
| 0.06 | 2.39 | 280 | 0.0620 |
| 0.0647 | 2.47 | 290 | 0.0605 |
| 0.0619 | 2.56 | 300 | 0.0624 |
| 0.0651 | 2.65 | 310 | 0.0605 |
| 0.0578 | 2.73 | 320 | 0.0597 |
| 0.0585 | 2.82 | 330 | 0.0598 |
| 0.0575 | 2.9 | 340 | 0.0601 |
| 0.0566 | 2.99 | 350 | 0.0602 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
skyisblueandgreen4/dogbooth
|
skyisblueandgreen4
| 2024-03-06T08:32:31Z | 2 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-05T07:44:12Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
base_model: stabilityai/stable-diffusion-2-1
inference: true
instance_prompt: a photo of [v]dog
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - skyisblueandgreen4/dogbooth
This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
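Until the snippet above is filled in, a minimal sketch assuming the standard 🤗 Diffusers loading pattern, using the instance prompt this model was trained on:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "skyisblueandgreen4/dogbooth", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of [v]dog in a park", guidance_scale=7.5).images[0]
image.save("dogbooth_sample.png")
```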
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
AnonymousSub/FPDM_bertlarge_model
|
AnonymousSub
| 2024-03-06T08:32:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-06T08:30:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
amazingvince/bitllama-goodwiki
|
amazingvince
| 2024-03-06T08:26:31Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:BEE-spoke-data/goodwiki-deduped-split",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T01:57:01Z |
---
tags:
- generated_from_trainer
datasets:
- BEE-spoke-data/goodwiki-deduped-split
metrics:
- accuracy
model-index:
- name: bitllama-goodwiki
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: BEE-spoke-data/goodwiki-deduped-split
type: BEE-spoke-data/goodwiki-deduped-split
metrics:
- name: Accuracy
type: accuracy
value: 0.4285134482793542
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bitllama-goodwiki
This model was trained from scratch on the BEE-spoke-data/goodwiki-deduped-split dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0525
- Accuracy: 0.4285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.1199 | 0.04 | 100 | 6.0749 | 0.1542 |
| 5.3869 | 0.07 | 200 | 5.3267 | 0.2032 |
| 4.9187 | 0.11 | 300 | 4.8566 | 0.2386 |
| 4.6185 | 0.14 | 400 | 4.5535 | 0.2624 |
| 4.3509 | 0.18 | 500 | 4.3388 | 0.2801 |
| 4.1666 | 0.21 | 600 | 4.1692 | 0.2956 |
| 4.0456 | 0.25 | 700 | 4.0399 | 0.3089 |
| 3.9273 | 0.28 | 800 | 3.9318 | 0.3193 |
| 3.8447 | 0.32 | 900 | 3.8173 | 0.3327 |
| 3.7143 | 0.35 | 1000 | 3.7108 | 0.3461 |
| 3.6485 | 0.39 | 1100 | 3.6116 | 0.3590 |
| 3.5171 | 0.42 | 1200 | 3.5303 | 0.3693 |
| 3.4464 | 0.46 | 1300 | 3.4554 | 0.3780 |
| 3.3955 | 0.49 | 1400 | 3.3999 | 0.3851 |
| 3.3551 | 0.53 | 1500 | 3.3432 | 0.3919 |
| 3.2787 | 0.56 | 1600 | 3.2981 | 0.3974 |
| 3.2705 | 0.6 | 1700 | 3.2566 | 0.4023 |
| 3.2281 | 0.64 | 1800 | 3.2172 | 0.4075 |
| 3.1759 | 0.67 | 1900 | 3.1826 | 0.4118 |
| 3.1603 | 0.71 | 2000 | 3.1547 | 0.4152 |
| 3.1328 | 0.74 | 2100 | 3.1283 | 0.4186 |
| 3.0916 | 0.78 | 2200 | 3.1055 | 0.4215 |
| 3.0939 | 0.81 | 2300 | 3.0875 | 0.4238 |
| 3.0584 | 0.85 | 2400 | 3.0732 | 0.4257 |
| 3.0711 | 0.88 | 2500 | 3.0631 | 0.4271 |
| 3.0612 | 0.92 | 2600 | 3.0565 | 0.4280 |
| 3.081 | 0.95 | 2700 | 3.0534 | 0.4284 |
| 3.0378 | 0.99 | 2800 | 3.0525 | 0.4285 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
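No usage example is included above; a minimal text-generation sketch assuming the checkpoint loads with the standard 🤗 Transformers pipeline (this is a small model trained from scratch on GoodWiki, so expect modest output quality):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="amazingvince/bitllama-goodwiki")
print(generator("The history of mathematics began", max_new_tokens=50)[0]["generated_text"])
```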
|
aditya11997/test_prior
|
aditya11997
| 2024-03-06T08:22:33Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"kandinsky",
"text-to-image",
"diffusers-training",
"dataset:kbharat7/DogChestXrayDatasetNew",
"base_model:kandinsky-community/kandinsky-2-2-prior",
"base_model:finetune:kandinsky-community/kandinsky-2-2-prior",
"license:creativeml-openrail-m",
"diffusers:KandinskyV22PriorPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T07:52:38Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-prior
datasets:
- kbharat7/DogChestXrayDatasetNew
tags:
- kandinsky
- text-to-image
- diffusers
- diffusers-training
inference: true
---
# Finetuning - aditya11997/test_prior
This pipeline was finetuned from **kandinsky-community/kandinsky-2-2-prior** on the **kbharat7/DogChestXrayDatasetNew** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['dogxraysmall']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("aditya11997/test_prior", torch_dtype=torch.float16)
pipe_t2i = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
prompt = "dogxraysmall"
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
image = pipe_t2i(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 768
* Mixed-precision: None
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/aditya11997/text2image-fine-tune/runs/9j7m0fr8).
|
Dangurangu/my-awesome-setfit-model
|
Dangurangu
| 2024-03-06T07:54:55Z | 6 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:SetFit/SentEval-CR",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-03-06T07:54:02Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- SetFit/SentEval-CR
metrics:
- accuracy
widget:
- text: you can take pic of your friends and the picture will pop up when they call
.
- text: the speakerphone , the radio , all features work perfectly .
- text: 'a ) the picture quality ( color and sharpness of focusing ) are so great
, it completely eliminated my doubt about digital imaging -- - how could one eat
rice one grain at a time : - ) )'
- text: so far the dvd works so i hope it does n 't break down like the reviews i
've read .
- text: i have a couple hundred contacts and the menu loads within a few seconds ,
no big deal .
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: SetFit/SentEval-CR
type: SetFit/SentEval-CR
split: test
metrics:
- type: accuracy
value: 0.8804780876494024
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'* slick-looking design and improved interface'</li><li>'as for bluetooth , no problems at all .'</li><li>'2 ) storage capacity'</li></ul> |
| 0 | <ul><li>"the day finally arrived when i was sure i 'd leave sprint ."</li><li>"neither message was answered ( they ask for 24 hours before replying - i 've been waiting 27 days . )"</li><li>'only problem is that is a bit heavy .'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8805 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("dangurangu/my-awesome-setfit-model")
# Run inference
preds = model("the speakerphone , the radio , all features work perfectly .")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.0625 | 44 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 7 |
| 1 | 9 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2205 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
advaitadasein/blip2-opt-6.7b
|
advaitadasein
| 2024-03-06T07:51:34Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-03-06T07:46:50Z |
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
inference: false
---
# BLIP-2, OPT-6.7b, pre-trained only
BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
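As a convenience, a minimal captioning sketch mirroring the documented BLIP-2 usage pattern, with this repository substituted as the checkpoint (the image URL is only an example):
```python
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("advaitadasein/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "advaitadasein/blip2-opt-6.7b", torch_dtype=torch.float16, device_map="auto"  # requires `accelerate`
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt").to(model.device, torch.float16)
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```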
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
|
alinerodrigues
| 2024-03-06T07:49:28Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-06T04:33:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1514
- Wer: 0.0869
- Cer: 0.0287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 22.1247 | 1.0 | 67 | 2.8766 | 1.0 | 1.0 |
| 4.814 | 2.0 | 134 | 0.3766 | 0.2506 | 0.0658 |
| 0.8372 | 3.0 | 201 | 0.2164 | 0.1177 | 0.0368 |
| 0.8372 | 4.0 | 268 | 0.1876 | 0.1106 | 0.0331 |
| 0.294 | 5.0 | 335 | 0.1951 | 0.1011 | 0.0325 |
| 0.2405 | 6.0 | 402 | 0.1718 | 0.0957 | 0.0300 |
| 0.2405 | 7.0 | 469 | 0.1647 | 0.0947 | 0.0296 |
| 0.1998 | 8.0 | 536 | 0.1709 | 0.0950 | 0.0305 |
| 0.1946 | 9.0 | 603 | 0.1730 | 0.0906 | 0.0299 |
| 0.1946 | 10.0 | 670 | 0.1695 | 0.0876 | 0.0289 |
| 0.1938 | 11.0 | 737 | 0.1649 | 0.0852 | 0.0282 |
| 0.1667 | 12.0 | 804 | 0.1644 | 0.0869 | 0.0280 |
| 0.1667 | 13.0 | 871 | 0.1534 | 0.0842 | 0.0275 |
| 0.163 | 14.0 | 938 | 0.1514 | 0.0869 | 0.0287 |
| 0.1568 | 15.0 | 1005 | 0.1583 | 0.0873 | 0.0287 |
| 0.1568 | 16.0 | 1072 | 0.1655 | 0.0856 | 0.0283 |
| 0.1465 | 17.0 | 1139 | 0.1691 | 0.0859 | 0.0272 |
| 0.138 | 18.0 | 1206 | 0.1777 | 0.0906 | 0.0290 |
| 0.138 | 19.0 | 1273 | 0.1652 | 0.0859 | 0.0280 |
| 0.1251 | 20.0 | 1340 | 0.1715 | 0.0856 | 0.0275 |
| 0.136 | 21.0 | 1407 | 0.1614 | 0.0832 | 0.0267 |
| 0.136 | 22.0 | 1474 | 0.1579 | 0.0805 | 0.0262 |
| 0.1179 | 23.0 | 1541 | 0.1777 | 0.0842 | 0.0277 |
| 0.1029 | 24.0 | 1608 | 0.1761 | 0.0825 | 0.0274 |
| 0.1029 | 25.0 | 1675 | 0.1665 | 0.0839 | 0.0275 |
| 0.1139 | 26.0 | 1742 | 0.1821 | 0.0801 | 0.0279 |
| 0.1019 | 27.0 | 1809 | 0.1807 | 0.0856 | 0.0279 |
| 0.1019 | 28.0 | 1876 | 0.1883 | 0.0812 | 0.0273 |
| 0.0911 | 29.0 | 1943 | 0.1904 | 0.0808 | 0.0272 |
| 0.0919 | 30.0 | 2010 | 0.1839 | 0.0862 | 0.0285 |
| 0.0919 | 31.0 | 2077 | 0.1902 | 0.0852 | 0.0282 |
| 0.084 | 32.0 | 2144 | 0.1934 | 0.0822 | 0.0275 |
| 0.0809 | 33.0 | 2211 | 0.2050 | 0.0822 | 0.0282 |
| 0.0809 | 34.0 | 2278 | 0.1955 | 0.0832 | 0.0281 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
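No usage example is included above; a minimal transcription sketch assuming the standard 🤗 Transformers ASR pipeline (the audio path is a placeholder for a 16 kHz recording):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10",
)
print(asr("speech_sample.wav")["text"])
```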
|
DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-v1.51
|
DooDooHyun
| 2024-03-06T07:43:13Z | 2,249 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"base_model:finetune:42dot/42dot_LLM-PLM-1.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:39:38Z |
---
license: cc-by-nc-4.0
base_model: 42dot/42dot_LLM-PLM-1.3B
tags:
- generated_from_trainer
model-index:
- name: AIFT-42dot_LLM-PLM-1.3B-v1.51
results: []
---
# AIFT-42dot_LLM-PLM-1.3B-v1.51
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
ise-uiuc/Magicoder-DS-6.7B
|
ise-uiuc
| 2024-03-06T07:40:45Z | 203 | 38 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"arxiv:2312.02120",
"arxiv:2305.06161",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-03T19:29:41Z |
---
license: other
library_name: transformers
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
license_name: deepseek
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of LLM-synthesized instruction data by empowering LLMs with *a wealth of open-source references* to produce more diverse, realistic, and controllable data.


## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series.
## Uses
### Direct Use
Magicoders are designed and best suited for **coding tasks**.
### Out-of-Scope Use
Magicoders may not work well in non-coding tasks.
## Bias, Risks, and Limitations
Magicoder models may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
{instruction}
@@ Response
"""
instruction = <Your code instruction here>
prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
model="ise-uiuc/Magicoder-DS-6.7B",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## 📝 Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## 🙏 Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
|
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
|
ITT-AF
| 2024-03-06T07:40:07Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:35:07Z |
---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
ychu612/BERTopic_vafn
|
ychu612
| 2024-03-06T07:33:20Z | 5 | 0 |
bertopic
|
[
"bertopic",
"text-classification",
"region:us"
] |
text-classification
| 2024-03-06T01:30:23Z |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# BERTopic_vafn
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("ychu612/BERTopic_vafn")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 3
* Number of training documents: 103
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | the - was - she - and - to | 15 | -1_the_was_she_and |
| 0 | the - she - was - and - her | 55 | 0_the_she_was_and |
| 1 | the - was - he - and - to | 33 | 1_the_was_he_and |
</details>
## Training hyperparameters
* calculate_probabilities: False
* language: english
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.23.0
* HDBSCAN: 0.8.33
* UMAP: 0.5.5
* Pandas: 2.1.4
* Scikit-Learn: 1.1.0
* Sentence-transformers: 2.3.1
* Transformers: 4.38.1
* Numba: 0.56.4
* Plotly: 5.9.0
* Python: 3.10.9
|
AbstractPerspective/Phi-2_MoE_GDPR
|
AbstractPerspective
| 2024-03-06T07:33:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:30:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anum231/food_classifier
|
anum231
| 2024-03-06T07:26:58Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:anum231/cancer_classifier_100",
"base_model:finetune:anum231/cancer_classifier_100",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-27T05:41:37Z |
---
license: apache-2.0
base_model: anum231/cancer_classifier_100
tags:
- generated_from_keras_callback
model-index:
- name: anum231/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# anum231/food_classifier
This model is a fine-tuned version of [anum231/cancer_classifier_100](https://huggingface.co/anum231/cancer_classifier_100) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5815
- Validation Loss: 0.4561
- Train Accuracy: 0.8276
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an optimizer sketch follows the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
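A sketch of how this optimizer could be rebuilt with the `create_optimizer` helper from Transformers (TensorFlow), matching the schedule above; this is an assumption about how the original was constructed, not a copy of the training script.

```python
from transformers import create_optimizer

# AdamWeightDecay with a linear PolynomialDecay schedule:
# 3e-05 -> 0.0 over 1160 steps, weight decay 0.01, no warmup.
optimizer, lr_schedule = create_optimizer(
    init_lr=3e-05,
    num_train_steps=1160,
    num_warmup_steps=0,
    weight_decay_rate=0.01,
)
```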
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6210 | 0.4706 | 0.8276 | 0 |
| 0.6095 | 0.4583 | 0.8103 | 1 |
| 0.6289 | 0.4566 | 0.8103 | 2 |
| 0.6230 | 0.5850 | 0.7241 | 3 |
| 0.5815 | 0.4561 | 0.8276 | 4 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
light77/gemma-Code-Instruct-Finetune-test-0.3
|
light77
| 2024-03-06T07:19:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:15:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kunger/Sakura-13B-LNovel-v0.9-4bit-AWQ
|
Kunger
| 2024-03-06T07:17:23Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-01T11:13:42Z |
---
license: cc-by-nc-sa-4.0
---
Original model: `https://huggingface.co/SakuraLLM/Sakura-13B-LNovel-v0.9`
4-bit AWQ quantization. Untested; use is not recommended.
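If you do want to try it, here is a minimal loading sketch (equally untested); it assumes `autoawq` is installed and that the checkpoint loads through the standard Transformers AWQ integration. The original Qwen architecture ships custom code, hence `trust_remote_code=True`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Kunger/Sakura-13B-LNovel-v0.9-4bit-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,  # custom Qwen modeling code
)
```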
|
Kunger/Sakura-13B-Qwen2beta-v0.9-GGUF
|
Kunger
| 2024-03-06T07:17:15Z | 12 | 0 | null |
[
"gguf",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T06:23:57Z |
---
license: cc-by-nc-sa-4.0
---
Original model: `https://huggingface.co/SakuraLLM/Sakura-13B-Qwen2beta-v0.9`
Converted directly with llama.cpp; untested.
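A minimal inference sketch via `llama-cpp-python`, also untested; the local file name below is hypothetical and should be replaced with the actual `.gguf` file downloaded from this repository.

```python
from llama_cpp import Llama

# Hypothetical file name; use the .gguf file actually present in this repo.
llm = Llama(model_path="./sakura-13b-qwen2beta-v0.9.gguf", n_ctx=4096)

out = llm("こんにちは。", max_tokens=64)  # illustrative prompt
print(out["choices"][0]["text"])
```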
|
Kunger/Sakura-13B-Qwen2beta-v0.9-4bit-AWQ
|
Kunger
| 2024-03-06T07:17:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-01T12:13:21Z |
---
license: cc-by-nc-sa-4.0
---
Original model: `https://huggingface.co/SakuraLLM/Sakura-13B-Qwen2beta-v0.9`
4-bit AWQ quantization. Untested; use is not recommended.
|
eunyounglee/degreemotion-bert-finetuning-3
|
eunyounglee
| 2024-03-06T07:16:27Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T06:44:00Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
model-index:
- name: degreemotion-bert-finetuning-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# degreemotion-bert-finetuning-3
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Hadiboo/boguey
|
Hadiboo
| 2024-03-06T07:16:09Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"art",
"text-generation-inference",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"region:us"
] |
text-generation
| 2024-03-06T07:13:10Z |
---
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- code
- art
- text-generation-inference
---
|
Sumail/Golden_Waves04_2b
|
Sumail
| 2024-03-06T07:13:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sumail/Bubble_bee04_2b",
"base_model:finetune:Sumail/Bubble_bee04_2b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:42:38Z |
---
base_model:
- 0x0dad0/nous_nb00
- Sumail/Bubble_bee04_2b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [0x0dad0/nous_nb00](https://huggingface.co/0x0dad0/nous_nb00)
* [Sumail/Bubble_bee04_2b](https://huggingface.co/Sumail/Bubble_bee04_2b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb00
layer_range: [0, 18]
- model: Sumail/Bubble_bee04_2b
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb00
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
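A minimal sketch of loading the resulting merge with Transformers, assuming it behaves like any other Gemma-architecture checkpoint:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sumail/Golden_Waves04_2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```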
|
yuchiz/models
|
yuchiz
| 2024-03-06T07:11:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2024-03-06T07:11:40Z |
---
license: apache-2.0
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="yuchiz/models")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("yuchiz/models")
model = AutoModelForCausalLMWithValueHead.from_pretrained("yuchiz/models")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling
|
hiyouga
| 2024-03-06T07:10:29Z | 33 | 8 |
peft
|
[
"peft",
"safetensors",
"llama-factory",
"lora",
"generated_from_trainer",
"text-generation",
"conversational",
"en",
"dataset:vicgalle/alpaca-gpt4",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:hiyouga/glaive-function-calling-v2-sharegpt",
"base_model:ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf",
"base_model:adapter:ISTA-DASLab/Llama-2-70b-AQLM-2Bit-1x16-hf",
"license:llama2",
"region:us"
] |
text-generation
| 2024-03-05T03:22:32Z |
---
license: llama2
library_name: peft
tags:
- llama-factory
- lora
- generated_from_trainer
base_model: BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf
inference: false
model-index:
- name: llama2_70b_aqlm_toolcall
results: []
datasets:
- vicgalle/alpaca-gpt4
- glaiveai/glaive-function-calling-v2
- hiyouga/glaive-function-calling-v2-sharegpt
language:
- en
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LLaMA-2 70B AQLM 2-bit QLoRA with function calling
This model is fine-tuned from [BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf](https://huggingface.co/BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf) using [LLaMA Factory](https://github.com/hiyouga/LLaMA-Factory).
The maximum GPU usage during training is **24GB**, and the model has preliminary conversation and tool-using abilities.
It requires at least 20 GB of GPU RAM for inference.

## Training and evaluation data
This model is fine-tuned using 2,000 examples of the Alpaca-GPT4 and Glaive-function-calling-v2 datasets.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel
tokenizer = AutoTokenizer.from_pretrained("hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
model = AutoModelForCausalLM.from_pretrained("BlackSamorez/Llama-2-70b-AQLM-2Bit-1x16-hf", torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, "hiyouga/Llama-2-70b-AQLM-2Bit-QLoRA-function-calling")
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
messages = [
{
"role": "system",
"content": (
"You have access to the following tools:\n"
"> Tool Name: get_current_weather\nTool Description: Get the current weather in a given location\n"
"Tool Args:\n"
" - location (string, required): The city and state, e.g. San Francisco, CA\n"
" - unit (string): should be one of [\"celsius\", \"fahrenheit\"]\n\n"
"Use the following format if using a tool:\n"
"```\n"
"Action: tool name (one of [get_current_weather]).\n"
"Action Input: the input to the tool, in a JSON format representing the kwargs "
"(e.g. ```{\"input\": \"hello world\", \"num_beams\": 5}```).\n"
"```\n"
)
},
{"role": "user", "content": "What is the weather like in Boston?"}
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to("cuda")
generate_ids = model.generate(inputs, streamer=streamer)
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results

### Benchmark results
| MMLU Benchmark | Bits | Metric | Accuracy |
| --------------- | ---- | ------------- | -------- |
| Average | 2 | 5-shot, top-1 | 62.38 |
| STEM | 2 | 5-shot, top-1 | 51.57 |
| Social Sciences | 2 | 5-shot, top-1 | 73.44 |
| Humanities | 2 | 5-shot, top-1 | 57.82 |
| Other | 2 | 5-shot, top-1 | 68.56 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.2
|
ottopilot/PriyaBelleXL
|
ottopilot
| 2024-03-06T07:09:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-to-image
| 2024-03-06T07:07:58Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
RAW photo, portrait, close-up, PriBlle, looking at viewer, smiling, perfect
black hair with highlights, brown eyes, professional headshot, shot on
Hasselblad, perfect lighting, dutch angle, bokeh, outdoors, depth of field,
blue dress, warm, loving, friendly <lora:PriyaBelleXL_v1:1>
parameters:
negative_prompt: bindi, mole, facial marks
output:
url: images/00001-3916971016.png
- text: >-
PriBlle, very dark-skinned woman, solo focus, mixed media, realistic anime
art style, art by Yusuke Nakamura, fractal, ukiyoe, watercolor ink wash
technique, intricate, highly detailed. Inspired by multiracial Hindi-West
Indian heritage, San Francisco Bay Area, and diaspora.
<lora:PriyaBelleXL_v1:1>
output:
url: images/00002-2902012777.png
- text: >-
PriBlle as Princess Jasmine, mind controlled by Jafar, sexy red outfit,
tiara, collar, Agrabah palace, entranced by magic:1.1, glowing, compliant,
submissive, obedient, Disney's Aladdin bad end <lora:PriyaBelleXL_v1:1>
output:
url: images/00121-3666660946.png
- text: >-
PriBlle is a college student on campus, dark blue and gold hooded sweatshirt
with bear logo and shorts, Berkeley <lora:PriyaBelleXL_v1:1>
output:
url: images/00172-3938050706.png
- text: >-
PriBlle is hella fine shawty, hyphy, outdoors, Lake Merritt, Oakland,
NorCal, yay area <lora:PriyaBelleXL_v1:1>
output:
url: images/00156-519328175.png
- text: >-
PriBlle, a woman wearing a green Oakland Athletics cap and sexy fan gear,
smiling, ponytail, bodycon, bedroom, natural light, sexy, tease, flirty
<lora:PriyaBelleXL_v1:1>
output:
url: images/00328-1196258457.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PriBlle
license: cc-by-nc-nd-4.0
---
# Priya Belle (Ottoverse original character) - SDXL 1.0
<Gallery />
## Model description
The same character as [ottopilot/PriyaBelle](https://huggingface.co/ottopilot/PriyaBelle), but trained for SDXL 1.0.
## Trigger words
You should use `PriBlle` to trigger the image generation.
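## Usage

A usage sketch with diffusers, assuming the weights load through the standard `load_lora_weights` API on the SDXL base pipeline; if the LoRA file in this repo uses a non-default name, pass it via `weight_name=`.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ottopilot/PriyaBelleXL")

# `PriBlle` is the trigger word described above.
image = pipe("RAW photo, portrait of PriBlle, professional headshot").images[0]
image.save("priblle.png")
```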
## Download model
Weights for this model are available in Safetensors format.
[Download](/ottopilot/PriyaBelleXL/tree/main) them in the Files & versions tab.
|
JesseStover/L2AI-dictionary-klue-bert-base
|
JesseStover
| 2024-03-06T06:47:19Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-03-04T13:52:44Z |
---
{}
---
The L2AI-dictionary model is a fine-tuned checkpoint of [klue/bert-base](https://huggingface.co/klue/bert-base) for multiple choice, specifically for selecting the best dictionary definition of a given word in a sentence. Below is an example usage:
```python
import numpy as np
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer
model_name = "JesseStover/L2AI-dictionary-klue-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)
model.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
prompts = "\"강아지는 뽀송뽀송하다.\"에 있는 \"강아지\"의 정의는 "
candidates = [
"\"(명사) 개의 새끼\"예요.",
"\"(명사) 부모나 할아버지, 할머니가 자식이나 손주를 귀여워하면서 부르는 말\"이예요."
]
inputs = tokenizer(
[[prompt, candidate] for candidate in candidates],
return_tensors="pt",
padding=True
)
labels = torch.tensor(0).unsqueeze(0)
with torch.no_grad():
outputs = model(
**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels
)
print({i: float(x) for i, x in enumerate(outputs.logits.softmax(1)[0])})
```
Training data was procured under Creative Commons [CC BY-SA 2.0 KR DEED](https://creativecommons.org/licenses/by-sa/2.0/kr/) from the National Institute of Korean Language's [Basic Korean Dictionary](https://krdict.korean.go.kr) and [Standard Korean Dictionary](https://stdict.korean.go.kr/).
|
vsocrates/incar-status-any
|
vsocrates
| 2024-03-06T06:44:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"medical",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T05:07:27Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- medical
widget:
- text: "Patient is a a formerly incarcerated individual having arrived in the ED with stomach pain."
- example_title: "Former Incarceration"
- text: "Patient arrived in the ED for chest pain."
- example_title: "No Incarceration"
---
# Model Card for incar-status-any
A Clinical Longformer-based model trained by the HAIL lab to predict incarceration status (past and present) in ED Notes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Vimig Socrates
- **Model type:** Longformer
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** [Clinical Longformer](https://huggingface.co/yikuan8/Clinical-Longformer)
## Uses
This model can be used to predict the incarceration status that a patient might have given most types of clinical ED notes.
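A minimal inference sketch with the `transformers` pipeline API; the returned label names depend on the model configuration.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vsocrates/incar-status-any")
note = "Patient is a formerly incarcerated individual having arrived in the ED with stomach pain."
print(classifier(note))
```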
## Bias, Risks, and Limitations
This should not be used directly without supervision from a physician as predicting incarceration status incorrectly can have significant negative social and clinical impacts.
## Training Details
### Training Data
This model was trained on custom annotated data labeled for incarceration status from Yale-New Haven Health Hospital System ED Notes.
### Training Procedure
## Evaluation
TODO
### Testing Data, Factors & Metrics
### Results
TODO
## Citation [optional]
Coming soon!
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Authors [optional]
Vimig Socrates
## Model Card Contact
Vimig Socrates: [[email protected]](mailto:[email protected])
|
samanthakarungi/fine-tuned-bert
|
samanthakarungi
| 2024-03-06T06:42:24Z | 5 | 1 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"finance",
"business",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-26T08:29:46Z |
---
language:
- en
widget:
- text: uber for today
- text: airtime and data
- text: breakfast meeting with client
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- finance
- text-classification
- business
---
### Model Description
<p>This model is a fine-tuned version of the <a href="https://huggingface.co/distilbert/distilbert-base-uncased">distilbert-base-uncased</a> model on Hugging Face. The model is trained to classify payment notes for business owners into one of the following categories; a short usage sketch follows the list.</p>
<ol>
<li>INVENTORY, SUPPLIES AND EQUIPMENT</li>
<li>PROFESSIONAL SERVICES</li>
<li>TRANSPORTATION AND TRAVEL</li>
<li>UTILITIES</li>
<li>EMPLOYEE BENEFITS AND COMPENSATION</li>
<li>MEALS AND ENTERTAINMENT</li>
<li>TAX PAYMENTS</li>
<li>LEGAL AND COMPLIANCE FEES</li>
<li>BUSINESS DEVELOPMENT AND INVESTMENT</li>
</ol>
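### Example usage
<p>A minimal sketch with the <code>transformers</code> pipeline API, assuming the checkpoint's label mapping reflects the categories above.</p>

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="samanthakarungi/fine-tuned-bert")
print(classifier("uber for today"))                 # expected to map to a travel-related category
print(classifier("breakfast meeting with client"))  # expected to map to meals and entertainment
```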
### Base Model Description
<p>DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts using the BERT base model.</p>
### Training results
<table>
<tr>
<th>Epoch</th>
<th>Training Loss</th>
<th>Validation Loss</th>
<th>Accuracy</th>
</tr>
<tr>
<th>0</th>
<th>No Log</th>
<th>0.263793</th>
<th>0.916230</th>
</tr>
<tr>
<th>1</th>
<th>No Log</th>
<th>0.185122</th>
<th>0.937173</th>
</tr>
<tr>
<th>2</th>
<th>0.318300</th>
<th>0.191695</th>
<th>0.937173</th>
</tr>
</table>
### Training code
<p>Check out the training code at this <a href="https://github.com/samanthaKarungi/iotec-pay-model-bert/tree/main/model/training_and_evaluation">github repo</a></p>
### Framework versions
<ul>
<li>Transformers 4.37.2</li>
<li>PyTorch 2.2.0</li>
<li>Datasets 2.17.1</li>
<li>Tokenizers 0.15.2</li>
</ul>
|
gokuls/wav2vec2-base-finetuned-ic-slurp
|
gokuls
| 2024-03-06T06:34:14Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-05T13:14:31Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ic-slurp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ic-slurp
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1101
- Accuracy: 0.7393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0345 | 1.0 | 527 | 3.9813 | 0.0673 |
| 3.5622 | 2.0 | 1055 | 3.4634 | 0.1867 |
| 2.7737 | 3.0 | 1582 | 2.7252 | 0.3638 |
| 2.1285 | 4.0 | 2110 | 2.1754 | 0.4827 |
| 1.6216 | 5.0 | 2637 | 1.8169 | 0.5701 |
| 1.1786 | 6.0 | 3165 | 1.5773 | 0.6347 |
| 0.8747 | 7.0 | 3692 | 1.5024 | 0.6568 |
| 0.7565 | 8.0 | 4220 | 1.5020 | 0.6694 |
| 0.5236 | 9.0 | 4747 | 1.5287 | 0.6799 |
| 0.4517 | 10.0 | 5275 | 1.5165 | 0.6879 |
| 0.364 | 11.0 | 5802 | 1.5159 | 0.6949 |
| 0.3221 | 12.0 | 6330 | 1.5217 | 0.6996 |
| 0.227 | 13.0 | 6857 | 1.5718 | 0.7075 |
| 0.1828 | 14.0 | 7385 | 1.6979 | 0.6901 |
| 0.1691 | 15.0 | 7912 | 1.6162 | 0.7093 |
| 0.1642 | 16.0 | 8440 | 1.6973 | 0.7048 |
| 0.1254 | 17.0 | 8967 | 1.7060 | 0.7100 |
| 0.1578 | 18.0 | 9495 | 1.7328 | 0.7063 |
| 0.1509 | 19.0 | 10022 | 1.7658 | 0.7073 |
| 0.1409 | 20.0 | 10550 | 1.7770 | 0.7052 |
| 0.1085 | 21.0 | 11077 | 1.8033 | 0.7074 |
| 0.106 | 22.0 | 11605 | 1.7000 | 0.7149 |
| 0.0764 | 23.0 | 12132 | 1.7943 | 0.7104 |
| 0.0671 | 24.0 | 12660 | 1.8323 | 0.7155 |
| 0.0768 | 25.0 | 13187 | 1.8486 | 0.7146 |
| 0.0741 | 26.0 | 13715 | 1.8227 | 0.7187 |
| 0.0731 | 27.0 | 14242 | 1.7824 | 0.7230 |
| 0.0935 | 28.0 | 14770 | 1.8987 | 0.7164 |
| 0.0829 | 29.0 | 15297 | 1.8774 | 0.7202 |
| 0.0588 | 30.0 | 15825 | 1.8820 | 0.7211 |
| 0.059 | 31.0 | 16352 | 1.9535 | 0.7246 |
| 0.0431 | 32.0 | 16880 | 1.9621 | 0.7237 |
| 0.0324 | 33.0 | 17407 | 2.0160 | 0.7256 |
| 0.0447 | 34.0 | 17935 | 1.9392 | 0.7262 |
| 0.025 | 35.0 | 18462 | 2.0095 | 0.7284 |
| 0.0522 | 36.0 | 18990 | 1.9994 | 0.7244 |
| 0.0482 | 37.0 | 19517 | 2.0566 | 0.7262 |
| 0.0203 | 38.0 | 20045 | 2.0287 | 0.7295 |
| 0.0221 | 39.0 | 20572 | 2.0634 | 0.7300 |
| 0.0444 | 40.0 | 21100 | 2.0593 | 0.7302 |
| 0.0348 | 41.0 | 21627 | 2.0712 | 0.7298 |
| 0.0154 | 42.0 | 22155 | 2.0429 | 0.7351 |
| 0.024 | 43.0 | 22682 | 2.0708 | 0.7352 |
| 0.0157 | 44.0 | 23210 | 2.0701 | 0.7368 |
| 0.0222 | 45.0 | 23737 | 2.0963 | 0.7338 |
| 0.0126 | 46.0 | 24265 | 2.1329 | 0.7340 |
| 0.0211 | 47.0 | 24792 | 2.1230 | 0.7370 |
| 0.0288 | 48.0 | 25320 | 2.1101 | 0.7393 |
| 0.0347 | 49.0 | 25847 | 2.1201 | 0.7375 |
| 0.0162 | 49.95 | 26350 | 2.1197 | 0.7381 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
davidilag/whisper-base-fo
|
davidilag
| 2024-03-06T06:27:05Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"fo",
"dataset:carlosdanielhernandezmena/ravnursson_asr",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-05T09:36:22Z |
---
language:
- fo
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- carlosdanielhernandezmena/ravnursson_asr
model-index:
- name: "Whisper Base Fo - D\xE1vid \xED L\xE1g"
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Base Fo - Dávid í Lág
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Ravnursson dataset.
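A minimal transcription sketch using the `transformers` ASR pipeline; the audio path is illustrative and should point to a (preferably 16 kHz) Faroese speech clip.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="davidilag/whisper-base-fo")
print(asr("sample_fo.wav")["text"])  # illustrative file name
```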
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
anashrivastava/tinyllama-colorist-lora
|
anashrivastava
| 2024-03-06T06:23:59Z | 1 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T06:19:00Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
model-index:
- name: tinyllama-colorist-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-colorist-lora
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) on the None dataset.
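A minimal sketch of loading the adapter on top of the base model with PEFT; the prompt format is an assumption and should be adjusted to whatever template was used for fine-tuning.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "PY007/TinyLlama-1.1B-Chat-v0.3"           # base model listed above
adapter_id = "anashrivastava/tinyllama-colorist-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Prompt format is an assumption; adjust to the training template if known.
inputs = tokenizer("Give me a warm sunset color:", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```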
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Aharneish/Gemma-Final
|
Aharneish
| 2024-03-06T06:17:35Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:other",
"region:us"
] | null | 2024-03-06T04:11:14Z |
---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
model-index:
- name: gemma-sprit-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-sprit-test
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 3.3436
- eval_runtime: 210.5944
- eval_samples_per_second: 0.95
- eval_steps_per_second: 0.237
- epoch: 0.81
- step: 1500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
|
Hiraishin/reranker-malaysian-mistral-474M
|
Hiraishin
| 2024-03-06T06:13:29Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T06:13:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hughlan1214/SER_wav2vec2-large-xlsr-53_fine-tuned_1.0
|
hughlan1214
| 2024-03-06T05:53:02Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-03-03T13:30:18Z |
---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: SER_wav2vec2-large-xlsr-53_fine-tuned_1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SER_wav2vec2-large-xlsr-53_240303
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on a [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset.
This dataset includes the 4 most popular datasets in English: Crema, Ravdess, Savee, and Tess, containing a total of over 12,000 .wav audio files. Each of these four datasets includes 6 to 8 different emotional labels.
It achieves the following results on the evaluation set:
- Loss: 1.7923
- Accuracy: 0.2408
- Precision: 0.2324
- Recall: 0.2466
- F1: 0.2226
## For a better-performing version, please refer to
[hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0](https://huggingface.co/hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0)
## Model description
The model was obtained by feature extraction with [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) followed by several rounds of fine-tuning. It predicts the 7 emotion classes contained in speech, laying the foundation for later combining visual micro-expressions and contextual semantics under LLMs to infer user emotions in real time.
Although the model was trained on purely English datasets, post-release testing showed that it also performs well in predicting emotions in Chinese and French, demonstrating the powerful cross-linguistic capability of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model.
```python
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
```
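A minimal inference sketch using the `transformers` audio-classification pipeline; the file name is illustrative and should point to a 16 kHz speech clip.

```python
from transformers import pipeline

ser = pipeline(
    "audio-classification",
    model="hughlan1214/SER_wav2vec2-large-xlsr-53_fine-tuned_1.0",
)
print(ser("speech_sample.wav", top_k=3))  # illustrative path; returns ranked emotion labels
```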
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.9297 | 1.0 | 101 | 1.9452 | 0.1233 | 0.0306 | 0.1468 | 0.0454 |
| 1.9114 | 2.0 | 202 | 1.9115 | 0.1773 | 0.1501 | 0.1803 | 0.1323 |
| 1.7863 | 3.0 | 303 | 1.8564 | 0.2081 | 0.1117 | 0.2193 | 0.1336 |
| 1.8439 | 4.0 | 404 | 1.8590 | 0.2042 | 0.2196 | 0.2156 | 0.1755 |
| 1.9361 | 5.0 | 505 | 1.8375 | 0.2081 | 0.2617 | 0.2213 | 0.1573 |
| 1.7572 | 6.0 | 606 | 1.8081 | 0.2100 | 0.2018 | 0.2214 | 0.1841 |
| 1.6715 | 7.0 | 707 | 1.8131 | 0.2389 | 0.2263 | 0.2442 | 0.2129 |
| 1.6687 | 8.0 | 808 | 1.7923 | 0.2408 | 0.2324 | 0.2466 | 0.2226 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
LN1996/output_run_3
|
LN1996
| 2024-03-06T05:52:53Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-06T05:22:43Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a room with professional interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - LN1996/output_run_3
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a room with professional interior design using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
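Until the snippet above is filled in, here is a hedged sketch of how DreamBooth LoRA weights like these are typically loaded with diffusers; if the weight file in this repo uses a non-default name, pass it via `weight_name=`.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LN1996/output_run_3")

prompt = "photo of a room with professional interior design"  # instance prompt from above
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("room.png")
```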
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
UBC-NLP/InfoDCL-hashtag
|
UBC-NLP
| 2024-03-06T05:44:55Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"social media",
"contrastive learning",
"en",
"arxiv:2203.07648",
"license:cc",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-07-07T03:56:29Z |
---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p>
<p align="center" float="left">
<p align="center">Publish at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
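## Example Usage
A minimal sketch of using one of these checkpoints as a sentence encoder with `transformers`; the mean-pooling step is a common choice and an assumption here, not something prescribed by the paper or this card.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "UBC-NLP/InfoDCL-hashtag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["I love this!", "This is so frustrating..."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden_dim)
embeddings = hidden.mean(dim=1)                # mean pooling over tokens (assumption)
print(embeddings.shape)
```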
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs).
|
UBC-NLP/InfoDCL-BERTweet-emoji
|
UBC-NLP
| 2024-03-06T05:43:52Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"social media",
"contrastive learning",
"en",
"arxiv:2203.07648",
"license:cc",
"endpoints_compatible",
"region:us"
] | null | 2023-08-13T20:17:32Z |
---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p>
<p align="center" float="left">
<p align="center">Publish at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs).
|
UBC-NLP/InfoDCL-BERTweet-hashtag
|
UBC-NLP
| 2024-03-06T05:43:23Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"social media",
"contrastive learning",
"en",
"arxiv:2203.07648",
"license:cc",
"endpoints_compatible",
"region:us"
] | null | 2023-08-13T18:38:56Z |
---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p>
<p align="center" float="left">
<p align="center">Publish at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs).
|
Litzy619/V0305P4
|
Litzy619
| 2024-03-06T05:43:14Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-05T16:50:42Z |
---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305P4
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8125 | 0.09 | 10 | 0.6624 |
| 0.25 | 0.17 | 20 | 0.1568 |
| 0.1567 | 0.26 | 30 | 0.1543 |
| 0.1522 | 0.34 | 40 | 0.1471 |
| 0.1487 | 0.43 | 50 | 0.1446 |
| 0.1517 | 0.51 | 60 | 0.1370 |
| 0.1348 | 0.6 | 70 | 0.1154 |
| 0.1261 | 0.68 | 80 | 0.1077 |
| 0.1125 | 0.77 | 90 | 0.0915 |
| 0.1142 | 0.85 | 100 | 0.0879 |
| 0.1095 | 0.94 | 110 | 0.0932 |
| 0.1035 | 1.02 | 120 | 0.0936 |
| 0.094 | 1.11 | 130 | 0.0874 |
| 0.0899 | 1.19 | 140 | 0.0800 |
| 0.0875 | 1.28 | 150 | 0.0835 |
| 0.0887 | 1.37 | 160 | 0.0783 |
| 0.0884 | 1.45 | 170 | 0.0791 |
| 0.0819 | 1.54 | 180 | 0.0745 |
| 0.0831 | 1.62 | 190 | 0.0685 |
| 0.0878 | 1.71 | 200 | 0.0681 |
| 0.0847 | 1.79 | 210 | 0.0680 |
| 0.0798 | 1.88 | 220 | 0.0646 |
| 0.0757 | 1.96 | 230 | 0.0680 |
| 0.0653 | 2.05 | 240 | 0.0663 |
| 0.0557 | 2.13 | 250 | 0.0678 |
| 0.052 | 2.22 | 260 | 0.0634 |
| 0.0517 | 2.3 | 270 | 0.0654 |
| 0.0576 | 2.39 | 280 | 0.0593 |
| 0.0573 | 2.47 | 290 | 0.0584 |
| 0.056 | 2.56 | 300 | 0.0569 |
| 0.0597 | 2.65 | 310 | 0.0584 |
| 0.0514 | 2.73 | 320 | 0.0578 |
| 0.0533 | 2.82 | 330 | 0.0577 |
| 0.0538 | 2.9 | 340 | 0.0582 |
| 0.0507 | 2.99 | 350 | 0.0583 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
stablediffusionapi/hello-young-25d
|
stablediffusionapi
| 2024-03-06T05:37:15Z | 23 | 0 |
diffusers
|
[
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T05:36:38Z |
---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# API Inference

## Get API Key
Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.
Replace the key in the code below and change **model_id** to "hello-young-25d".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/hello-young-25d)
Model link: [View model](https://modelslab.com/models/hello-young-25d)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "hello-young-25d",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN**
|
youngko/mistral-test
|
youngko
| 2024-03-06T05:29:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T05:27:40Z |
---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-Instruct-v0.1
model-index:
- name: mistral-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-test
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on an unknown dataset.
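Since this repository stores a PEFT adapter rather than full weights, a minimal loading sketch (assumption: the adapter attaches to the `mistralai/Mistral-7B-Instruct-v0.1` base listed in the metadata; the prompt is illustrative):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
adapter_id = "youngko/mistral-test"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the PEFT (LoRA) adapter on top of the base instruct model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "[INST] Explain parameter-efficient fine-tuning in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```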
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
min-dong/LLM_test1
|
min-dong
| 2024-03-06T04:58:42Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-03-06T04:47:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Padu98/ampazephyr-2-prompt-2-versuch-2
|
Padu98
| 2024-03-06T04:57:58Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:adapter:HuggingFaceH4/zephyr-7b-beta",
"license:mit",
"region:us"
] | null | 2024-03-05T15:08:33Z |
---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: HuggingFaceH4/zephyr-7b-beta
model-index:
- name: ampazephyr-2-prompt-2-versuch-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ampazephyr-2-prompt-2-versuch-2
This model is a fine-tuned version of [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3778
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 136 | 0.4528 |
| No log | 2.0 | 272 | 0.3838 |
| No log | 3.0 | 408 | 0.3778 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.16.0
- Tokenizers 0.15.2
|
hakuhodo-tech/japanese-clip-vit-h-14-bert-base
|
hakuhodo-tech
| 2024-03-06T04:52:02Z | 11 | 1 |
transformers
|
[
"transformers",
"safetensors",
"custom-clip-model",
"feature-extraction",
"clip",
"ja",
"japanese",
"japanese-clip",
"custom_code",
"arxiv:2103.00020",
"arxiv:2111.07991",
"license:cc-by-nc-sa-4.0",
"region:us"
] |
feature-extraction
| 2024-03-06T03:23:16Z |
---
license: cc-by-nc-sa-4.0
language:
- ja
tags:
- clip
- ja
- japanese
- japanese-clip
pipeline_tag: feature-extraction
---
# Japanese CLIP ViT-H/14 (Base)
## Table of Contents
1. [Overview](#overview)
1. [Usage](#usage)
1. [Model Details](#model-details)
1. [Evaluation](#evaluation)
1. [Limitations and Biases](#limitations-and-biases)
1. [Citation](#citation)
1. [See Also](#see-also)
1. [Contact Information](#contact-information)
## Overview
* **Developed by**: [HAKUHODO Technologies Inc.](https://www.hakuhodo-technologies.co.jp/)
* **Model type**: Contrastive Language-Image Pre-trained Model
* **Language(s)**: Japanese
* **LICENSE**: [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
Presented here is a Japanese [CLIP (Contrastive Language-Image Pre-training)](https://arxiv.org/abs/2103.00020) model,
mapping Japanese texts and images to a unified embedding space.
Capable of multimodal tasks including zero-shot image classification,
text-to-image retrieval, and image-to-text retrieval,
this model extends its utility when integrated with other components,
contributing to generative models like image-to-text and text-to-image generation.
## Usage
### Dependencies
```bash
python3 -m pip install pillow sentencepiece torch torchvision transformers
```
### Inference
The usage is similar to [`CLIPModel`](https://huggingface.co/docs/transformers/model_doc/clip)
and [`VisionTextDualEncoderModel`](https://huggingface.co/docs/transformers/model_doc/vision-text-dual-encoder).
```python
import requests
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor, BatchEncoding
# Download
model_name = "hakuhodo-tech/japanese-clip-vit-h-14-bert-base"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
# Prepare raw inputs
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Process inputs
inputs = processor(
text=["犬", "猫", "象"],
images=image,
return_tensors="pt",
padding=True,
)
# Infer and output
outputs = model(**BatchEncoding(inputs).to(device))
probs = outputs.logits_per_image.softmax(dim=1)
print([f"{x:.2f}" for x in probs.flatten().tolist()]) # ['0.00', '1.00', '0.00']
```
## Model Details
### Components
The model consists of a frozen ViT-H image encoder from
[laion/CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K)
and a 12-layer 12-head BERT text encoder initialized from
[rinna/japanese-clip-vit-b-16](https://huggingface.co/rinna/japanese-clip-vit-b-16).
### Training
Model training was performed by Zhi Wang on 8 A100 (80 GB) GPUs.
[Locked-image Tuning (LiT)](https://arxiv.org/abs/2111.07991) is adopted.
See more details in [the paper](https://www.anlp.jp/proceedings/annual_meeting/2024/pdf_dir/B6-5.pdf).
### Dataset
The Japanese subset of the [laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi) dataset containing ~120M image-text pairs.
## Evaluation
### Testing Data
The 5K evaluation set (val2017) of [MS-COCO](https://cocodataset.org/)
with [STAIR Captions](http://captions.stair.center/).
### Metrics
Zero-shot image-to-text and text-to-image recall@1, 5, 10.
### Results
| | | | | | | |
| :---------------------------------------------------------------------------------------------------------------------- | :------: | :------: | :------: | :------: | :------: | :------: |
| <td colspan=3 align=center>Text Retrieval</td> <td colspan=3 align=center>Image Retrieval</td> |
| | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 |
| [recruit-jp/japanese-clip-vit-b-32-roberta-base](https://huggingface.co/recruit-jp/japanese-clip-vit-b-32-roberta-base) | 23.0 | 46.1 | 57.4 | 16.1 | 35.4 | 46.3 |
| [rinna/japanese-cloob-vit-b-16](https://huggingface.co/rinna/japanese-cloob-vit-b-16) | 37.1 | 63.7 | 74.2 | 25.1 | 48.0 | 58.8 |
| [rinna/japanese-clip-vit-b-16](https://huggingface.co/rinna/japanese-clip-vit-b-16) | 36.9 | 64.3 | 74.3 | 24.8 | 48.8 | 60.0 |
| [**Japanese CLIP ViT-H/14 (Base)**](https://huggingface.co/hakuhodo-tech/japanese-clip-vit-h-14-bert-base) | 39.2 | 66.3 | 76.6 | 28.9 | 53.3 | 63.9 |
| [**Japanese CLIP ViT-H/14 (Deeper)**](https://huggingface.co/hakuhodo-tech/japanese-clip-vit-h-14-bert-deeper) | **48.7** | 74.0 | 82.4 | 36.5 | 61.5 | 71.8 |
| [**Japanese CLIP ViT-H/14 (Wider)**](https://huggingface.co/hakuhodo-tech/japanese-clip-vit-h-14-bert-wider) | 47.9 | **74.2** | **83.2** | **37.3** | **62.8** | **72.7** |
\* [Japanese Stable CLIP ViT-L/16](https://huggingface.co/stabilityai/japanese-stable-clip-vit-l-16) is excluded for zero-shot retrieval evaluation as
[the model was partially pre-trained with MS-COCO](https://huggingface.co/stabilityai/japanese-stable-clip-vit-l-16#training-dataset).
## Limitations and Biases
Despite our data filtering, it is crucial
to acknowledge the possibility of the training dataset
containing offensive or inappropriate content.
Users should be mindful of the potential societal impact
and ethical considerations associated with the outputs
generated by the model when deploying in production systems.
It is recommended not to employ the model for applications
that have the potential to cause harm or distress
to individuals or groups.
## Citation
If you found this model useful, please consider citing:
```bibtex
@article{japanese-clip-vit-h,
author = {王 直 and 細野 健人 and 石塚 湖太 and 奥田 悠太 and 川上 孝介},
journal = {言語処理学会年次大会発表論文集},
month = {Mar},
pages = {1547--1552},
title = {日本語特化の視覚と言語を組み合わせた事前学習モデルの開発 Developing Vision-Language Pre-Trained Models for {J}apanese},
volume = {30},
year = {2024}
}
```
## See Also
* [Japanese CLIP ViT-H/14 (Deeper)](https://huggingface.co/hakuhodo-tech/japanese-clip-vit-h-14-bert-deeper)
* [Japanese CLIP ViT-H/14 (Wider)](https://huggingface.co/hakuhodo-tech/japanese-clip-vit-h-14-bert-wider)
## Contact Information
Please contact
[hr-koho\@hakuhodo-technologies.co.jp](mailto:[email protected]?subject=Japanese%20CLIP%20ViT-H/14%20Models)
for questions and comments about the model,
and/or
for business and partnership inquiries.
お問い合わせは
[hr-koho\@hakuhodo-technologies.co.jp](mailto:[email protected]?subject=日本語CLIP%20ViT-H/14モデルについて)
にご連絡ください。
|
WokeEngineer/Reinforce-Pixelcopter-PLE-v0
|
WokeEngineer
| 2024-03-06T04:51:45Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T04:51:40Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 62.70 +/- 32.09
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
ChaoticNeutrals/Layris_9B
|
ChaoticNeutrals
| 2024-03-06T04:51:34Z | 7 | 7 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:ChaoticNeutrals/Eris_Remix_7B",
"base_model:merge:ChaoticNeutrals/Eris_Remix_7B",
"base_model:l3utterfly/mistral-7b-v0.1-layla-v4",
"base_model:merge:l3utterfly/mistral-7b-v0.1-layla-v4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T01:54:39Z |
---
base_model:
- ChaoticNeutrals/Eris_Remix_7B
- l3utterfly/mistral-7b-v0.1-layla-v4
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Layris

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B)
* [l3utterfly/mistral-7b-v0.1-layla-v4](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Eris_Remix_7B
layer_range: [0, 20]
- sources:
- model: l3utterfly/mistral-7b-v0.1-layla-v4
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
|
cllm/deepseekcoder-7b-instruct-spider
|
cllm
| 2024-03-06T04:46:28Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-27T09:58:48Z |
Metadata:
- AR loss to consistency loss ratio: 10:1
- Spider dataset size: 7k
- n-token sequence length: 16
- Jacobi trajectory data cleaning: True
- Target model: Deepseek-Coder-7B fine-tuned on Spider
- Release date: 02/26/2024
|
leducanh1203/BioMistral7B_AutoTrain
|
leducanh1203
| 2024-03-06T04:41:58Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T04:41:50Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
brainventures/deplot_kr
|
brainventures
| 2024-03-06T04:38:11Z | 124 | 1 |
transformers
|
[
"transformers",
"pytorch",
"pix2struct",
"image-text-to-text",
"image-to-text",
"ko",
"arxiv:2212.10505",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2024-02-07T06:45:49Z |
---
language:
- ko
pipeline_tag: image-to-text
---
# **deplot_kr**
deplot_kr is an Image-to-Data (Text) model based on Google's Pix2Struct architecture.
It was fine-tuned from [DePlot](https://huggingface.co/google/deplot) using Korean chart image-text pairs.
deplot_kr은 google의 pix2struct 구조를 기반으로 한 한국어 image-to-data(텍스트 형태의 데이터 테이블) 모델입니다.
[DePlot](https://huggingface.co/google/deplot) 모델을 한국어 차트 이미지-텍스트 쌍 데이터세트(30만 개)를 이용하여 fine-tuning 했습니다.
## **How to use**
You can run a prediction by providing an image as input.
The model predicts the data table, in text form, contained in the image.
이미지를 모델에 입력하면 모델은 이미지로부터 표 형태의 데이터 테이블을 예측합니다.
```python
from transformers import Pix2StructForConditionalGeneration, AutoProcessor
from PIL import Image
processor = AutoProcessor.from_pretrained("brainventures/deplot_kr")
model = Pix2StructForConditionalGeneration.from_pretrained("brainventures/deplot_kr")
image_path = "IMAGE_PATH"
image = Image.open(image_path)
inputs = processor(images=image, return_tensors="pt")
pred = model.generate(flattened_patches=inputs["flattened_patches"], attention_mask=inputs["attention_mask"], max_length=1024)
print(processor.batch_decode(pred, skip_special_tokens=True)[0])
```
**Model Input Image**

**Model Output - Prediction**
대상:
제목: 2011-2021 보건복지 분야 일자리의 <unk>증
유형: 단일형 일반 세로 <unk>대형
| 보건(천 명) | 복지(천 명)
1분위 | 29.7 | 178.4
2분위 | 70.8 | 97.3
3분위 | 86.4 | 61.3
4분위 | 28.2 | 16.0
5분위 | 52.3 | 0.9
### **Preprocessing**
According to [Liu et al. (2023)](https://arxiv.org/pdf/2212.10505.pdf), the linearized tables use:
- markdown format
- `|` : separating cells (column separator)
- `\n` : separating rows (row separator)
### **Train**
The model was trained in a TPU environment.
- num_warmup_steps : 1,000
- num_training_steps : 40,000
## **Evaluation Results**
This model achieves the following results:
|metrics name | % |
|:---|---:|
| RNSS (Relative Number Set Similarity)| 98.1615 |
|RMS (Relative Mapping Similarity) Precision | 83.1615 |
|RMS Recall | 26.3549 |
| RMS F1 Score | 31.5633 |
## Contact
For questions and comments, please use the discussion tab or email [email protected]
|
seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF
|
seyf1elislam
| 2024-03-06T04:33:26Z | 0 | 0 | null |
[
"gguf",
"GGUF",
"base_model:seyf1elislam/WestKunai-Hermes-long-128k-test-7b",
"base_model:quantized:seyf1elislam/WestKunai-Hermes-long-128k-test-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T02:14:44Z |
---
tags:
- GGUF
base_model:
- seyf1elislam/WestKunai-Hermes-long-128k-test-7b
---
# WestKunai-Hermes-long-128k-test-7b
- Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
- Original model: [WestKunai-Hermes-long-128k-test-7b](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [seyf1elislam's WestKunai-Hermes-long-128k-test-7b ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b).
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westkunai-hermes-long-128k-test-7b.Q2_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westkunai-hermes-long-128k-test-7b.Q3_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [WestKunai-Hermes-long-128k-test-7b.Q4_K_S.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/WestKunai-Hermes-long-128k-test-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westkunai-hermes-long-128k-test-7b.Q5_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westkunai-hermes-long-128k-test-7b.Q6_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westkunai-hermes-long-128k-test-7b.Q8_0.gguf ](https://huggingface.co/seyf1elislam/WestKunai-Hermes-long-128k-test-7b-GGUF/blob/main/westkunai-hermes-long-128k-test-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
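To run one of the quants above locally, a minimal `llama-cpp-python` sketch (the file path is an assumption — point it at whichever quant you downloaded):
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above has been downloaded to the working directory.
llm = Llama(model_path="./westkunai-hermes-long-128k-test-7b.Q4_K_M.gguf", n_ctx=4096)

output = llm(
    "Briefly explain what GGUF quantization trades off.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```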
|
bhaswata08/Llama-2-7b-chat-hf-function-calling-v3-AWQ
|
bhaswata08
| 2024-03-06T04:26:51Z | 48 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-05T09:45:58Z |
---
license: llama2
---
# Model Card for bhaswata08/Llama-2-7b-chat-hf-function-calling-v3-AWQ
Model creator: Trelis
Original model: Llama-2-7b-chat-hf-function-calling-v3
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
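Pending author-provided code, a minimal loading sketch (assumptions: a recent `transformers` with the AWQ integration and `autoawq` installed, and that the tokenizer ships the Llama-2 chat template; the function-calling prompt format of the original Trelis model may differ):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhaswata08/Llama-2-7b-chat-hf-function-calling-v3-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# AWQ weights load through the transformers AWQ integration (requires autoawq).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What functions do you have access to?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))
```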
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DFJordan/binary-image-classifier
|
DFJordan
| 2024-03-06T04:23:02Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-06T04:03:43Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: binary-image-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-image-classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1222
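For quick inference, a minimal sketch (assumption: the checkpoint loads directly through the image-classification pipeline; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="DFJordan/binary-image-classifier")

# Replace with a local image file or a PIL.Image instance.
predictions = classifier("example.jpg")
print(predictions)  # e.g. [{"label": ..., "score": ...}, ...]
```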
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1302 | 1.0 | 67 | 0.1486 |
| 0.0503 | 2.0 | 134 | 0.1087 |
| 0.0188 | 3.0 | 201 | 0.1511 |
| 0.0116 | 4.0 | 268 | 0.1225 |
| 0.0088 | 5.0 | 335 | 0.1222 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lcfrs/gemma-2b-it
|
lcfrs
| 2024-03-06T04:18:41Z | 0 | 0 | null |
[
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T03:55:44Z |
---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
Created by running `quantize` from `llama.cpp` on [gemma-2b-it.gguf](https://huggingface.co/google/gemma-2b-it-GGUF/blob/main/).
```sh
llama.cpp $ ./quantize models/gemma-2b-it.gguf models/gemma-2b-it-Q4_K_M.gguf Q4_K_M
main: build = 2351 (652ca2bd)
main: built with Android (10552028, +pgo, +bolt, +lto, -mlgo, based on r487747d) clang version 17.0.2 (https://android.googlesource.com/toolchain/llvm-project d9f89f4d16663d5012e5c09495f3b30ece3d2362) for x86_64-apple-darwin22.5.0
main: quantizing 'models/gemma-2b-it.gguf' to 'models/gemma-2b-it-Q4_K_M.gguf' as Q4_K_M
llama_model_loader: loaded meta data with 19 key-value pairs and 164 tensors from models/gemma-2b-it.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gemma
llama_model_loader: - kv 1: general.name str = gemma-2b-it
llama_model_loader: - kv 2: gemma.context_length u32 = 8192
llama_model_loader: - kv 3: gemma.block_count u32 = 18
llama_model_loader: - kv 4: gemma.embedding_length u32 = 2048
llama_model_loader: - kv 5: gemma.feed_forward_length u32 = 16384
llama_model_loader: - kv 6: gemma.attention.head_count u32 = 8
llama_model_loader: - kv 7: gemma.attention.head_count_kv u32 = 1
llama_model_loader: - kv 8: gemma.attention.key_length u32 = 256
llama_model_loader: - kv 9: gemma.attention.value_length u32 = 256
llama_model_loader: - kv 10: gemma.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 11: tokenizer.ggml.model str = llama
llama_model_loader: - kv 12: tokenizer.ggml.bos_token_id u32 = 2
llama_model_loader: - kv 13: tokenizer.ggml.eos_token_id u32 = 1
llama_model_loader: - kv 14: tokenizer.ggml.padding_token_id u32 = 0
llama_model_loader: - kv 15: tokenizer.ggml.unknown_token_id u32 = 3
llama_model_loader: - kv 16: tokenizer.ggml.tokens arr[str,256128] = ["<pad>", "<eos>", "<bos>", "<unk>", ...
llama_model_loader: - kv 17: tokenizer.ggml.scores arr[f32,256128] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 18: tokenizer.ggml.token_type arr[i32,256128] = [3, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - type f32: 164 tensors
llama_model_quantize_internal: meta size = 6042528 bytes
[..snip..]
llama_model_quantize_internal: model size = 9561.29 MB
llama_model_quantize_internal: quant size = 1549.19 MB
main: quantize time = 27285.22 ms
main: total time = 27285.22 ms
```
|
uname-n/tiny-aquatic-llama.10k
|
uname-n
| 2024-03-06T04:16:13Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:Open-Orca/SlimOrca-Dedup",
"dataset:uname-n/slim-orca-dedup-chat-10k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T02:30:10Z |
---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca-Dedup
- uname-n/slim-orca-dedup-chat-10k
widget:
- text: "<|system|>\nYou are a chatbot who can help code!</s>\n<|user|>\nWrite me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.</s>\n<|assistant|>\n"
---
<div align="center">
# Tiny Aquatic Llama
</div>
#### This Model
This is a chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). The model was fine-tuned on a 10k sample from [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup).
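A minimal generation sketch using the chat layout from the widget above (the system/user text is illustrative; sampling settings are arbitrary):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "uname-n/tiny-aquatic-llama.10k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Prompt layout mirrors the widget example (TinyLlama/Zephyr-style chat tags).
prompt = (
    "<|system|>\nYou are a chatbot who can help code!</s>\n"
    "<|user|>\nWrite a Python one-liner that reverses a string.</s>\n"
    "<|assistant|>\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```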
#### Note
This model is deranged.
|
kaustavbhattacharjee/finetuning-DistillBERT-amazon-polarity
|
kaustavbhattacharjee
| 2024-03-06T04:11:33Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:amazon_polarity",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-05T23:03:33Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-DistillBERT-amazon-polarity
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: amazon_polarity
type: sentiment
args: default
metrics:
- type: accuracy
value: 0.9166666666666666
name: Accuracy
- type: loss
value: 0.1919892132282257
name: Loss
- type: f1
value: 0.9169435215946843
name: F1
datasets:
- amazon_polarity
pipeline_tag: text-classification
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-DistillBERT-amazon-polarity
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [Amazon Polarity](https://huggingface.co/datasets/amazon_polarity) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1920
- Accuracy: 0.9167
- F1: 0.9169
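For quick inference, a minimal sketch (assumption: the checkpoint loads directly through the text-classification pipeline; the review texts are illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="kaustavbhattacharjee/finetuning-DistillBERT-amazon-polarity",
)

print(classifier("The battery died after two days and the seller never replied."))
print(classifier("Exactly as described, arrived early, works perfectly."))
```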
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/eacc_3_10
|
OwOOwO
| 2024-03-06T04:05:24Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T04:02:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shg1421/t5_astrology_peft
|
shg1421
| 2024-03-06T04:01:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T01:20:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
erichen-0615/zephyr-7b-dpo-full
|
erichen-0615
| 2024-03-06T03:55:21Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-04T23:43:58Z |
---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- alignment-handbook
- generated_from_trainer
- trl
- dpo
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: zephyr-7b-dpo-full
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5026
- Rewards/chosen: -1.0472
- Rewards/rejected: -1.9901
- Rewards/accuracies: 0.7619
- Rewards/margins: 0.9430
- Logps/rejected: -460.7913
- Logps/chosen: -388.8264
- Logits/rejected: 1.7625
- Logits/chosen: 1.0392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 615
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5736 | 0.21 | 100 | 0.5842 | -0.3837 | -0.8468 | 0.7242 | 0.4632 | -346.4596 | -322.4754 | -2.3510 | -2.4330 |
| 0.5116 | 0.42 | 200 | 0.5308 | -0.9042 | -1.7331 | 0.7520 | 0.8289 | -435.0859 | -374.5288 | 0.5012 | 0.0459 |
| 0.5027 | 0.63 | 300 | 0.5084 | -0.8877 | -1.7467 | 0.7639 | 0.8590 | -436.4478 | -372.8834 | 1.8224 | 1.1385 |
| 0.4823 | 0.84 | 400 | 0.5037 | -1.1953 | -2.1521 | 0.7619 | 0.9568 | -476.9852 | -403.6375 | 1.9978 | 1.3075 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
Sumail/Bubble_bee04_2b
|
Sumail
| 2024-03-06T03:55:03Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:tomaszki/gemma-28",
"base_model:merge:tomaszki/gemma-28",
"base_model:tomaszki/gemma-28-copy",
"base_model:merge:tomaszki/gemma-28-copy",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T03:52:00Z |
---
base_model:
- tomaszki/gemma-28-copy
- tomaszki/gemma-28
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [tomaszki/gemma-28-copy](https://huggingface.co/tomaszki/gemma-28-copy)
* [tomaszki/gemma-28](https://huggingface.co/tomaszki/gemma-28)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: tomaszki/gemma-28
layer_range: [0, 18]
- model: tomaszki/gemma-28-copy
layer_range: [0, 18]
merge_method: slerp
base_model: tomaszki/gemma-28
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
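For intuition, spherical linear interpolation (SLERP) blends each pair of weight tensors along the arc between them rather than along a straight line, with the interpolation factor `t` set per the filters above. A simplified sketch of the idea (not mergekit's exact implementation):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a, b = v0.flatten().float(), v1.flatten().float()
    a_n, b_n = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp(torch.dot(a_n, b_n), -1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel: fall back to plain linear interpolation
        return ((1.0 - t) * a + t * b).view_as(v0)
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.view_as(v0)
```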
|
prajapatisushilkumar/lion
|
prajapatisushilkumar
| 2024-03-06T03:54:02Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T03:47:31Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### lion Dreambooth model trained by prajapatisushilkumar following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23/cse/01
Sample pictures of this concept:

|
Granoladata/abnormality_classifier_biobert_v1_3_epochs
|
Granoladata
| 2024-03-06T03:52:27Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T03:52:18Z |
---
base_model: dmis-lab/biobert-v1.1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: abnormality_classifier_biobert_v1_3_epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# abnormality_classifier_biobert_v1_3_epochs
This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2835
- Accuracy: 0.957
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0099 | 1.0 | 1125 | 0.2986 | 0.952 |
| 0.0048 | 2.0 | 2250 | 0.2623 | 0.961 |
| 0.0063 | 3.0 | 3375 | 0.2835 | 0.957 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
dyang415/mixtral-fc-w-resp-new-format-4e-no-negative
|
dyang415
| 2024-03-06T03:47:37Z | 7 | 0 |
peft
|
[
"peft",
"safetensors",
"mixtral",
"generated_from_trainer",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-05T05:03:16Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model-index:
- name: mixtral-fc-w-resp-new-format-4e-no-negative
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
model_type: AutoModelForCausalLM
tokenizer_type: LlamaTokenizer
trust_remote_code: true
load_in_8bit: false
load_in_4bit: true
strict: false
chat_template: inst
datasets:
- path: ./data/with_function_response/function_not_used_training.jsonl
type: sharegpt
conversation: mistral
# - path: ./data/with_function_response/no_function_training.jsonl
# type: sharegpt
# conversation: mistral
- path: ./data/with_function_response/function_used_training.jsonl
type: sharegpt
conversation: mistral
dataset_prepared_path: last_run_prepared
val_set_size: 0.0
output_dir: ../mixtral-fc-w-resp-new-format-4e-no-negative
model_config:
output_router_logits: true
adapter: qlora
lora_model_dir:
sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true
lora_r: 32
lora_alpha: 64
lora_dropout: 0.05
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
wandb_project: function-call
wandb_name: mixtral-instruct-lora-no-negative
wandb_log_model: end
hub_model_id: dyang415/mixtral-fc-w-resp-new-format-4e-no-negative
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 4
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
logging_steps: 1
flash_attention: true
loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
weight_decay: 0.0
fsdp:
fsdp_config:
```
</details><br>
# mixtral-fc-w-resp-new-format-4e-no-negative
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: QuantizationMethod.BITS_AND_BYTES
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
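For inference, the LoRA adapter can be loaded on top of the 4-bit quantized base model roughly as follows (a sketch reusing the quantization config above; not the exact training-time setup):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit NF4 config mirroring the values listed above.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mixtral-8x7B-Instruct-v0.1", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "dyang415/mixtral-fc-w-resp-new-format-4e-no-negative")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x7B-Instruct-v0.1")
```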
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Framework versions
- PEFT 0.7.0
- Transformers 4.37.0
- Pytorch 2.0.1+cu117
- Datasets 2.17.1
- Tokenizers 0.15.0
|
Commandante/german-party-sentiment-bert
|
Commandante
| 2024-03-06T03:43:59Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:oliverguhr/german-sentiment-bert",
"base_model:finetune:oliverguhr/german-sentiment-bert",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-02-22T09:34:10Z |
---
license: mit
base_model: oliverguhr/german-sentiment-bert
tags:
- generated_from_trainer
model-index:
- name: german-party-sentiment-bert-complete-gsbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# German-Party-Sentiment-Bert
This model is a fine-tuned version of [oliverguhr/german-sentiment-bert](https://huggingface.co/oliverguhr/german-sentiment-bert) on a dataset consisting of mentions of German political parties.
It achieves the following results on the evaluation set:
- Loss: 0.8912
## Model description
More information needed
## Intended uses & limitations
More information needed
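In the meantime, a minimal inference sketch (the example sentence is illustrative, and the class-to-label mapping is not documented here, so check the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Commandante/german-party-sentiment-bert"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

# Illustrative sentence mentioning a party; in English: "The party made a good proposal today."
inputs = tokenizer("Die Partei hat heute einen guten Vorschlag gemacht.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # map indices to labels via model.config.id2label
```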
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 20
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 120
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2844 | 1.0 | 65 | 0.9382 |
| 0.9704 | 2.0 | 130 | 0.8912 |
| 0.7394 | 3.0 | 195 | 1.0455 |
| 0.5401 | 4.0 | 260 | 1.2711 |
| 0.4274 | 5.0 | 325 | 1.3578 |
| 0.2289 | 6.0 | 390 | 1.6143 |
| 0.1949 | 7.0 | 455 | 1.8376 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Tokenizers 0.15.1
|
OwOOwO/eacc_dc3
|
OwOOwO
| 2024-03-06T03:43:30Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T03:23:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
seyf1elislam/WestKunai-XS-7b-GGUF
|
seyf1elislam
| 2024-03-06T03:43:19Z | 22 | 0 | null |
[
"gguf",
"GGUF",
"base_model:seyf1elislam/WestKunai-XS-7b",
"base_model:quantized:seyf1elislam/WestKunai-XS-7b",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T01:41:02Z |
---
tags:
- GGUF
base_model:
- seyf1elislam/WestKunai-X-7b
---
# WestKunai-X-7b
- Model creator: [seyf1elislam](https://huggingface.co/seyf1elislam)
- Original model: [WestKunai-X-7b](https://huggingface.co/seyf1elislam/WestKunai-X-7b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [seyf1elislam's WestKunai-X-7b ](https://huggingface.co/seyf1elislam/WestKunai-X-7b).
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [westkunai-x-7b.Q2_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q2_K.gguf ) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [westkunai-x-7b.Q3_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q3_K_M.gguf ) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [WestKunai-X-7b.Q4_K_S.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/WestKunai-X-7b.Q4_K_S.gguf ) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [westkunai-x-7b.Q4_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q4_K_M.gguf ) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [westkunai-x-7b.Q5_K_M.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q5_K_M.gguf ) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [westkunai-x-7b.Q6_K.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q6_K.gguf ) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [westkunai-x-7b.Q8_0.gguf ](https://huggingface.co/seyf1elislam/WestKunai-X-7b-GGUF/blob/main/westkunai-x-7b.Q8_0.gguf ) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
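A minimal sketch for running one of these files with llama-cpp-python (it assumes the chosen GGUF file has already been downloaded locally; the context size and prompt are illustrative, and the expected prompt format follows the merged parent models):

```python
from llama_cpp import Llama

llm = Llama(model_path="./westkunai-x-7b.Q4_K_M.gguf", n_ctx=4096)  # path to the downloaded file
out = llm("Once upon a time", max_tokens=64)
print(out["choices"][0]["text"])
```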
|
DingosGotMyBaby/uhn-twitch-chat
|
DingosGotMyBaby
| 2024-03-06T03:34:11Z | 100 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T15:59:10Z |
---
license: mit
---
# A model based on UberHaxorNova's Twitch chat
Trained on over 700 VODs' worth of chat, which after some scuffed filtering became a 300 MB dataset.
## Dataset
The dataset was created by downloading all the VODs available at the time of creation as JSON files and stripping out all the chat messages into a simple line-by-line text file.
## Training
This was trained using [aitextgen](https://github.com/minimaxir/aitextgen), created by [Max Woolf](https://github.com/minimaxir), using the example notebook found [here](https://colab.research.google.com/drive/15qBZx5y9rdaQSyWpsreMDnTiZ5IlN0zD?usp=sharing). Using GPT-2's 124M model as the base, it was trained for 3000 steps and produces an output scuffed enough to look like a real Twitch chat user.
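Because the result is a standard GPT-2 checkpoint, it can also be sampled directly with 🤗 Transformers (a sketch; the prompt and sampling settings are illustrative):

```python
from transformers import pipeline

chat = pipeline("text-generation", model="DingosGotMyBaby/uhn-twitch-chat")
print(chat("LUL ", max_new_tokens=40, do_sample=True, temperature=0.9)[0]["generated_text"])
```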
## Use
This was created as a fun little project for the Discord server and, as such, should only be used for fun and not to harm people. This model must also follow the ethics guide of the tool that created it: https://github.com/minimaxir/aitextgen/blob/master/docs/ethics.md
|
guy-smiley/flan-t5-small-samsum-3
|
guy-smiley
| 2024-03-06T03:33:26Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"dataset:samsum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-02-12T18:29:37Z |
---
library_name: transformers
datasets:
- samsum
---
# Model Card for Model ID
A version of google/flan-t5-small, fine-tuned on the samsum dataset.
## Model Details
### Model Description
- **Developed by:** [guy-smiley](https://huggingface.co/guy-smiley)
- **Model type:** Language model
- **Language(s) (NLP):** English
- **Finetuned from model:** [flan-t5-small](https://huggingface.co/google/flan-t5-small)
## Uses
Chat and dialogue summarization
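A minimal usage sketch (the dialogue below is an illustrative samsum-style example):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="guy-smiley/flan-t5-small-samsum-3")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow."
)
print(summarizer(dialogue)[0]["summary_text"])
```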
## Bias, Risks, and Limitations
"Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application."
## Training Details
### Training Data
[samsum](https://huggingface.co/datasets/samsum)
### Training Procedure
* Trained with [Seq2SeqTrainer](https://huggingface.co/docs/transformers/v4.38.2/en/main_classes/trainer#transformers.Seq2SeqTrainer)
* GitHub repo:
#### Training Hyperparameters
* learning_rate: 0.00005
* train_batch_size: 8
* eval_batch_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 1
#### Training Results
* epoch: 1
* train_loss: 1.83195
* eval
* eval_loss: 1.67304
* eval_rouge1: 42.8081
* eval_rouge2: 18.6456
* eval_rougeL: 35.4345
* eval_rougeLsum: 39.1534
|
omzn/facemark_detection
|
omzn
| 2024-03-06T03:25:52Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"ja",
"base_model:tohoku-nlp/bert-base-japanese-whole-word-masking",
"base_model:finetune:tohoku-nlp/bert-base-japanese-whole-word-masking",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-01-09T03:30:58Z |
---
language: ja
license: cc-by-sa-4.0
tags:
- generated_from_trainer
- text-classification
metrics:
- accuracy
widget:
- text: 💪(^ω^ 🍤)
example_title: Facemark 1
- text: (੭ु∂∀6)੭ु⁾⁾ ஐ•*¨*•.¸¸
example_title: Facemark 2
- text: :-P
example_title: Facemark 3
- text: (o.o)
example_title: Facemark 4
- text: (10/7~)
example_title: Non-facemark 1
- text: ??<<「ニャア(しゃーねぇな)」プイッ
example_title: Non-facemark 2
- text: (0.01)
example_title: Non-facemark 3
base_model: cl-tohoku/bert-base-japanese-whole-word-masking
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Facemark Detection
This model classifies a given text as a facemark (1) or not (0).
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking) on an original facemark dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1301
- Accuracy: 0.9896
## Model description
This model classifies a given text as a facemark (1) or not (0).
## Intended uses & limitations
Extract a facemark-prone portion of text and apply it to the model.
Extraction of facemark candidates can be done with a regex, but the matches usually include many non-facemarks.
For example, I used the following Perl regex pattern to extract facemark-prone text.
```perl
my $input_text = "facemark-prone text";  # text to classify
my $text = '[0-9A-Za-zぁ-ヶ一-龠]';
my $non_text = '[^0-9A-Za-zぁ-ヶ一-龠]';
my $allow_text = '[ovっつ゜ニノ三二]';
my $hw_kana = '[ヲ-゚]';
my $open_branket = '[\(∩꒰(]';
my $close_branket = '[\)∩꒱)]';
my $around_face = '(?:' . $non_text . '|' . $allow_text . ')*';
my $face = '(?!(?:' . $text . '|' . $hw_kana . '){3,8}).{3,8}';
my $face_char = $around_face . $open_branket . $face . $close_branket . $around_face;
my $facemark;
if ($input_text=~/($face_char)/) {
$facemark = $1;
}
```
Examples of facemarks are:
```
(^U^)←
。\n\n⊂( *・ω・ )⊃
っ(。>﹏<)
タカ( ˘ω' ) ヤスゥ…
。(’↑▽↑)
……💰( ˘ω˘ )💰
ーーー(*´꒳`*)!(
…(o:∇:o)
!!…(;´Д`)?
(*´﹃ `*)✿
```
Examples of non-facemarks are:
```
(3,000円)
: (1/3)
(@nVApO)
(10/7~)
?<<「ニャア(しゃーねぇな)」プイッ
(残り 51字)
(-0.1602)
(25-0)
(コーヒー飲んだ)
(※軽トラ)
```
This model is intended for use on facemark-prone text like the examples above.
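A minimal classification sketch using the widget examples above (note that the Japanese BERT tokenizer may need extra dependencies such as fugashi to load):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="omzn/facemark_detection")
print(clf(["💪(^ω^ 🍤)", "(3,000円)"]))  # expected: facemark (1) vs. non-facemark (0)
```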
## Training and evaluation data
Facemark data is collected manually and automatically from Twitter timeline.
* train.csv : 35591 samples (29911 facemark, 5680 non-facemark)
* test.csv : 3954 samples (3315 facemark, 639 non-facemark)
## Training procedure
```bash
python ./examples/pytorch/text-classification/run_glue.py \
--model_name_or_path=cl-tohoku/bert-base-japanese-whole-word-masking \
--do_train --do_eval \
--max_seq_length=128 --per_device_train_batch_size=32 \
--use_fast_tokenizer=False --learning_rate=2e-5 --num_train_epochs=50 \
--output_dir=facemark_classify \
--save_steps=1000 --save_total_limit=3 \
--train_file=train.csv \
--validation_file=test.csv
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50.0
### Training results
It achieves the following results on the evaluation set:
- Loss: 0.1301
- Accuracy: 0.9896
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.7.1
- Tokenizers 0.13.2
|
eunyounglee/diverge-bert-finetuning-1
|
eunyounglee
| 2024-03-06T03:21:21Z | 89 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T02:29:39Z |
---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
model-index:
- name: diverge-bert-finetuning-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# diverge-bert-finetuning-1
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
agnedil/Mistral-7B-openassistant-guanaco-v2
|
agnedil
| 2024-03-06T03:20:17Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"region:us"
] | null | 2024-03-05T08:55:35Z |
Model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) fine-tuned on the [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset using the following [Colab notebook](https://colab.research.google.com/drive/19lYWzMvZAc2cWPojRiPnYIR5Ok62CgFQ?usp=drive_link).
|
julienkay/stable-diffusion-2-1
|
julienkay
| 2024-03-06T03:17:54Z | 0 | 0 | null |
[
"onnx",
"text-to-image",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-02-15T20:47:45Z |
---
license: openrail++
pipeline_tag: text-to-image
---
The official [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) model converted to ONNX for usage with Unity Sentis.
See [com.doji.diffusers](https://github.com/julienkay/com.doji.diffusers) for details.
|
ho1iday/pokemon-lora
|
ho1iday
| 2024-03-06T03:14:20Z | 1 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-05T12:34:41Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - ho1iday/pokemon-lora
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images in the following.




## Intended uses & limitations
#### How to use
A minimal sketch (assuming the standard diffusers LoRA loading path; the prompt is illustrative):
```python
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("ho1iday/pokemon-lora")  # apply the LoRA weights from this repo
image = pipe("a cartoon pokemon with blue eyes").images[0]  # illustrative prompt
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The LoRA weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset, as noted above.
|
kharato/opt-125m-gptq-4bit
|
kharato
| 2024-03-06T03:10:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-03-06T03:10:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|