| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string | string | timestamp[us, tz=UTC] | int64 | int64 | string | list | string | timestamp[us, tz=UTC] | string |
LoneStriker/dolphin-2.2-yi-34b-200k-6.0bpw-h6-exl2
|
LoneStriker
| 2023-12-08T11:12:58Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"dataset:ehartford/dolphin",
"dataset:jondurbin/airoboros-2.2.1",
"dataset:ehartford/samantha-data",
"dataset:ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T10:56:50Z |
---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
language:
- en
license: other
license_name: yi-license
license_link: LICENSE
---
Dolphin 2.2 🐬
https://erichartford.com/dolphin
<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/KqsVXIvBd3akEjvijzww7.png" width="600" />
Dolphin-2.2-Yi-34b-200k's training was sponsored by [convai](https://www.convai.com/).
This model is based on Yi and is subject to the Yi license.
The base model has 200k context; I fine-tuned it with 16k.
Note: trust_remote_code is no longer needed! Thank you Yi team!
New in 2.2 are conversation and empathy. With an infusion of curated Samantha and WizardLM DNA, Dolphin can now give you personal advice, will care about your feelings, and has had extra training in long multi-turn conversation.
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any request, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models
You are responsible for any content you create using this model. Enjoy responsibly.
## Dataset
This dataset is Dolphin, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/)
I modified the dataset for uncensoring, deduping, cleaning, and quality.
I added Jon Durbin's excellent Airoboros dataset to increase creativity.
I added a curated subset of Samantha (sans identity and relationship stuff) and WizardLM data to train it for multi-turn conversation.
## Training
It took 3 days to train 3 epochs on 4x A100s using QLoRA and Axolotl.
Prompt format:
This model (and all my future releases) uses the [ChatML](https://github.com/openai/openai-python/blob/main/chatml.md) prompt format.
```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:
```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```
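For reference, here is a minimal Python sketch that assembles this format with plain string formatting (the helper name and the example question are just for illustration):
```python
def build_chatml_prompt(system: str, user: str) -> str:
    # Assemble the ChatML turns exactly as shown above, leaving the
    # assistant turn open for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Why is the ocean salty?",
)
```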
## Gratitude
- This model was made possible by the generous sponsorship of Convai.
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Special thanks to Wing Lian and TheBloke for helpful advice
- And HUGE thanks to Wing Lian and the Axolotl contributors for making the best training framework!
- [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.
## Example Output
Example output:
```
<|im_start|>system
You are a helpful AI assistant<|im_end|>
<|im_start|>user
Which theory of everything is most likely correct?<|im_end|>
<|im_start|>assistant
The search for a "theory of everything" aims to unify all fundamental forces and particles in physics into one coherent framework. Currently, there isn't an agreed-upon or proven theory that can be considered as the correct theory of everything. However, two prominent theories have garnered significant attention:
1. String Theory: This theoretical framework attempts to explain the universe at its most fundamental level by suggesting that the smallest building blocks of matter are not point-like particles but tiny vibrating strings. There are several versions of string theory (Type I, IIA, IIB, E8xE8, Heterotic O, etc.) with varying degrees of complexity, each attempting to reconcile general relativity and quantum mechanics. Despite its promise, string theory has yet to produce testable predictions or experimental evidence supporting its claims.
2. Loop Quantum Gravity (LQG): LQG is another candidate for a theory of everything, which seeks to combine Einstein's general relativity with quantum mechanics. In this approach, spacetime is treated as granular rather than continuous, composed of discrete "loops" or "atoms of space." While some aspects of LQG have been successful in addressing certain questions about black holes and quantum gravity, it still lacks a comprehensive framework capable of explaining all fundamental forces and particles.
Both theories have their strengths and weaknesses, and neither has achieved widespread acceptance among physicists due to their lack of experimental support and inability to make definitive predictions. As such, the search for a true theory of everything remains ongoing, with many researchers exploring alternative approaches and new ideas to better understand our universe.
```
[If you would like to financially support my efforts](https://ko-fi.com/erichartford)
[I also have some swag you can buy](https://fa7113.myshopify.com/)
|
chriss1245/LunarLander-v2
|
chriss1245
| 2023-12-08T11:01:01Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T11:00:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.36 +/- 17.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
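In the meantime, a minimal loading sketch, assuming stable-baselines3 >= 2.0 (Gymnasium API); the checkpoint filename below is a placeholder, so check the repo's Files and versions tab for the actual name:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (the filename here is a placeholder).
checkpoint = load_from_hub(repo_id="chriss1245/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```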
|
Dreaver/ppo-SnowballTarget
|
Dreaver
| 2023-12-08T10:41:29Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-08T10:41:26Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Dreaver/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
nizarh1999/my_awesome_model
|
nizarh1999
| 2023-12-08T10:34:19Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T10:32:50Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1563
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
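A rough `TrainingArguments` equivalent of the hyperparameters listed above (a sketch only; `output_dir` is an assumption, and the Adam values above are the Trainer defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my_awesome_model",  # assumption, not taken from this card
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```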
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 110 | 0.1579 | 0.9612 |
| No log | 2.0 | 220 | 0.1563 | 0.9703 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
enicholsonbmj/bert-finetuned-single-label-journal-classifier_not_quite_balanced
|
enicholsonbmj
| 2023-12-08T10:28:54Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T10:07:25Z |
---
license: mit
base_model: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-single-label-journal-classifier_not_quite_balanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-single-label-journal-classifier_not_quite_balanced
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4764
- eval_accuracy: 0.9135
- eval_f1: 0.9135
- eval_runtime: 6.8737
- eval_samples_per_second: 126.132
- eval_steps_per_second: 15.857
- epoch: 6.0
- step: 5838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
brucethemoose/functionary-7b-v1-Q8_0
|
brucethemoose
| 2023-12-08T10:18:43Z | 2 | 1 | null |
[
"gguf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-12-08T03:47:40Z |
---
license: llama2
---
Just a high-bpw quantization of functionary, for a drop-in OpenAI function-calling replacement. See the llama-cpp-python docs:
https://llama-cpp-python.readthedocs.io/en/latest/server/
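A minimal Python sketch, assuming a llama-cpp-python build that ships the functionary chat format; the GGUF filename is a placeholder, so check the repo's Files and versions tab:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="functionary-7b-v1.Q8_0.gguf",  # placeholder filename
    chat_format="functionary",  # assumption: requires a build with this chat format registered
    n_ctx=4096,
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather like in Paris?"}]
)
print(response["choices"][0]["message"])
```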
|
saradiaz/distilbert-base-uncased-simpleEng-classifier
|
saradiaz
| 2023-12-08T10:17:42Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:wikimedia/wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T10:06:38Z |
---
license: apache-2.0
datasets:
- wikimedia/wikipedia
language:
- en
metrics:
- glue
- accuracy
- f1
---
# Model Card for Model ID
This model detects whether your writing is closer to Simple English or to standard English.
## Model Details
It was fine-tuned with the Wikipedia datasets for English (20220301.en) and Simple English (20220301.simple).
Evaluation data after training:
- 'eval_loss': 0.100
- 'eval_accuracy': 0.982
- 'eval_f1': 0.982
- 'eval_runtime': 10.236
- 'eval_samples_per_second': 58.616
- 'eval_steps_per_second': 7.327
- 'epoch': 10.0
### Model Description
- **Model type:** Text Classification
- **Language(s) (NLP):** English, Simple English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** distilbert-base-uncased
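A minimal usage sketch with the transformers pipeline (the label names it returns depend on the model's config and are not documented on this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="saradiaz/distilbert-base-uncased-simpleEng-classifier",
)

print(classifier("The cat sat on the mat."))
print(classifier("Endogenous retroviruses constitute a substantial fraction of the genome."))
```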
|
hungphongtrn/phobert-base-VietMed-v3
|
hungphongtrn
| 2023-12-08T10:15:47Z | 31 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T06:27:40Z |
---
base_model: vinai/phobert-base
tags:
- generated_from_trainer
model-index:
- name: phobert-base-VietMed-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-base-VietMed-v3
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.13.3
|
rohithjoseph/all_file
|
rohithjoseph
| 2023-12-08T10:10:22Z | 10 | 0 |
keras
|
[
"keras",
"region:us"
] | null | 2023-12-08T09:37:30Z |
## Deep Prediction Hub
### Overview
Welcome to Deep Prediction Hub, a Streamlit web application that provides two deep learning-based tasks: Sentiment Classification and Tumor Detection.
### Tasks
1. **Sentiment Classification**: classifies the sentiment of a given text as "Positive" or "Negative". Users can input a review, and the application provides the sentiment classification using various models.
2. **Tumor Detection**: users can upload an image, and the application uses a Convolutional Neural Network (CNN) model to determine whether a tumor is present.
### Getting Started
#### Prerequisites
- Python 3.6 or higher
- Required packages: streamlit, numpy, cv2, PIL, tensorflow
- Pre-trained models: PP.pkl, BP.pkl, DP.keras, RN.keras, LS.keras, CN.keras
- Trained IMDb word index: ensure the IMDb word index is available for sentiment classification.
#### Installation
Clone the repository: `git clone https://github.com/yourusername/deep-prediction-hub.git`
### Usage
Access the application by opening the provided URL after running the Streamlit app, then choose between the "Sentiment Classification" and "Tumor Detection" tasks.
#### Sentiment Classification
1. Enter a review in the text area.
2. Select a model from the dropdown.
3. Click "Submit" and then "Classify Sentiment."
#### Tumor Detection
1. Upload an image using the file uploader.
2. Click "Detect Tumor" to perform tumor detection.
### Models
- Perceptron (PP.pkl): Perceptron-based sentiment classification model.
- Backpropagation (BP.pkl): Backpropagation-based sentiment classification model.
- DNN (DP.keras): Deep Neural Network sentiment classification model.
- RNN (RN.keras): Recurrent Neural Network sentiment classification model.
- LSTM (LS.keras): Long Short-Term Memory sentiment classification model.
- CNN (CN.keras): Convolutional Neural Network tumor detection model.
### Contributing
Feel free to contribute by opening issues or submitting pull requests. Please follow the contribution guidelines.
### License
This project is licensed under the MIT License - see the LICENSE file for details.
|
nanom/vizwiz-flan-t5-base-ft-multicap
|
nanom
| 2023-12-08T09:53:42Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-answering",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
question-answering
| 2023-12-08T09:48:52Z |
---
license: apache-2.0
language:
- en
pipeline_tag: question-answering
inference: false
---
|
folflo/Bert2Bert_HunSum_1205
|
folflo
| 2023-12-08T09:49:58Z | 10 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"encoder-decoder",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-12-06T21:24:16Z |
---
tags:
- summarization
- generated_from_trainer
model-index:
- name: Bert2Bert_HunSum_1205
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bert2Bert_HunSum_1205
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 16
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Divyanshu97/donut-bs-level
|
Divyanshu97
| 2023-12-08T09:45:32Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2023-12-08T09:44:40Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-bs-level
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-bs-level
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MrKaya/Burak
|
MrKaya
| 2023-12-08T09:41:21Z | 1 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-08T09:26:03Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (1).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (2).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (3).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (4).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (5).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (6).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (7).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (8).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (9).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (10).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (11).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (12).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (13).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (14).jpeg
- text: burak
parameters:
negative_prompt: Arif
output:
url: images/Burak (15).jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: burak
---
# Burak
<Gallery />
## Model description
Burak
## Trigger words
You should use `burak` to trigger the image generation.
## Download model
[Download](/MrKaya/Burak/tree/main) them in the Files & versions tab.
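A minimal loading sketch with diffusers, assuming the repo ships SDXL-compatible LoRA weights that `load_lora_weights` can resolve on its own:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA weights from this repo (assumes diffusers finds the weight file automatically).
pipe.load_lora_weights("MrKaya/Burak")

# `burak` is the trigger word documented above; the negative prompt mirrors the widget examples.
image = pipe("a portrait photo of burak", negative_prompt="Arif").images[0]
image.save("burak.png")
```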
|
Kekega/bart-cnn-samsum-finetuned
|
Kekega
| 2023-12-08T09:41:06Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-12-08T09:36:14Z |
---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4980
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0927 | 1.0 | 19 | 1.3663 |
| 0.102 | 2.0 | 38 | 0.6508 |
| 0.0959 | 3.0 | 57 | 0.4980 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Farhanherostar/Herotest
|
Farhanherostar
| 2023-12-08T09:30:53Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"ar",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T09:27:24Z |
---
license: apache-2.0
language:
- ar
metrics:
- accuracy
library_name: adapter-transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
brettbbb/cs_cot_256
|
brettbbb
| 2023-12-08T09:26:24Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T08:51:34Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: cs_cot_256
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs_cot_256
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
nyanxyz/llama2-test-4
|
nyanxyz
| 2023-12-08T09:15:05Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T09:11:42Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
nanom/vizwiz-t5-base-ft-multicap-v0
|
nanom
| 2023-12-08T09:10:43Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-answering",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
question-answering
| 2023-12-08T07:02:54Z |
---
license: apache-2.0
language:
- en
pipeline_tag: question-answering
inference: false
---
|
nanduzz/q-FrozenLake-v1-4x4-noSlippery
|
nanduzz
| 2023-12-08T09:07:02Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-11-26T10:43:34Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="nanduzz/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
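The `load_from_hub` helper used above is not defined on this card; here is a minimal sketch of one possible implementation, assuming the pickle stores a dict with the keys referenced above (such as `env_id`):
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table dict from the Hub and deserialize it.
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```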
|
LTC-AI-Labs/L2-7b-Mini-Mythologic-Synthia
|
LTC-AI-Labs
| 2023-12-08T09:03:35Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"dataset:migtissera/Synthia-v1.3",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T07:07:59Z |
---
license: llama2
datasets:
- migtissera/Synthia-v1.3
language:
- en
pipeline_tag: text-generation
tags:
- conversational
---
Mythologic 7b trained with the Synthia dataset.
In my experience you can get either very detailed responses or very short responses; you might have to tweak the amount of generation.
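For example, a minimal generation sketch where `max_new_tokens` is the knob to adjust (the prompt and value are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LTC-AI-Labs/L2-7b-Mini-Mythologic-Synthia"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Tell me about the Norse world tree.", return_tensors="pt").to(model.device)
# Raise or lower max_new_tokens to control how long the responses can get.
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```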
|
Yacong/dajin_girl_lora
|
Yacong
| 2023-12-08T09:03:14Z | 2 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-12-08T08:58:05Z |
---
license: creativeml-openrail-m
base_model: cyber_v33
instance_prompt: a photo of dajin girl
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - Yacong/dajin_girl_lora
These are LoRA adaptation weights for cyber_v33. The weights were trained on a photo of dajin girl using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
|
jordyvl/lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate
|
jordyvl
| 2023-12-08T08:55:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-01T14:05:52Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-01_txt_vis_concat_enc_9_10_11_12_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9605
- Accuracy: 0.785
- Exit 0 Accuracy: 0.0625
- Exit 1 Accuracy: 0.2425
- Exit 2 Accuracy: 0.5225
- Exit 3 Accuracy: 0.72
- Exit 4 Accuracy: 0.785
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy | Exit 2 Accuracy | Exit 3 Accuracy | Exit 4 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|:---------------:|:---------------:|:---------------:|
| No log | 0.96 | 8 | 2.6964 | 0.1225 | 0.055 | 0.0625 | 0.0625 | 0.0625 | 0.1225 |
| No log | 1.96 | 16 | 2.6306 | 0.1775 | 0.05 | 0.0625 | 0.0625 | 0.0625 | 0.1775 |
| No log | 2.96 | 24 | 2.5176 | 0.2325 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.2325 |
| No log | 3.96 | 32 | 2.3854 | 0.28 | 0.045 | 0.0625 | 0.0625 | 0.0625 | 0.28 |
| No log | 4.96 | 40 | 2.2424 | 0.335 | 0.04 | 0.0625 | 0.0625 | 0.0625 | 0.335 |
| No log | 5.96 | 48 | 2.0887 | 0.395 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.395 |
| No log | 6.96 | 56 | 1.9008 | 0.5125 | 0.0425 | 0.0625 | 0.0625 | 0.0625 | 0.5125 |
| No log | 7.96 | 64 | 1.7061 | 0.575 | 0.04 | 0.0625 | 0.0625 | 0.0625 | 0.575 |
| No log | 8.96 | 72 | 1.5366 | 0.6075 | 0.0375 | 0.0625 | 0.0625 | 0.0625 | 0.6075 |
| No log | 9.96 | 80 | 1.3956 | 0.6475 | 0.0375 | 0.0625 | 0.0625 | 0.0625 | 0.6475 |
| No log | 10.96 | 88 | 1.2953 | 0.675 | 0.0275 | 0.0625 | 0.0625 | 0.0675 | 0.675 |
| No log | 11.96 | 96 | 1.2023 | 0.6775 | 0.025 | 0.0625 | 0.0625 | 0.07 | 0.6775 |
| No log | 12.96 | 104 | 1.1167 | 0.72 | 0.0325 | 0.0625 | 0.0625 | 0.0875 | 0.72 |
| No log | 13.96 | 112 | 1.0342 | 0.73 | 0.03 | 0.0625 | 0.0625 | 0.1025 | 0.73 |
| No log | 14.96 | 120 | 1.0137 | 0.7375 | 0.0325 | 0.0625 | 0.0625 | 0.115 | 0.7375 |
| No log | 15.96 | 128 | 0.9790 | 0.7375 | 0.0325 | 0.0625 | 0.0625 | 0.1175 | 0.7375 |
| No log | 16.96 | 136 | 0.9306 | 0.7675 | 0.035 | 0.0625 | 0.0625 | 0.1575 | 0.7675 |
| No log | 17.96 | 144 | 0.8941 | 0.77 | 0.04 | 0.0625 | 0.0625 | 0.14 | 0.77 |
| No log | 18.96 | 152 | 0.8953 | 0.765 | 0.0425 | 0.0625 | 0.0625 | 0.1825 | 0.765 |
| No log | 19.96 | 160 | 0.8898 | 0.77 | 0.04 | 0.0625 | 0.0625 | 0.2175 | 0.77 |
| No log | 20.96 | 168 | 0.8756 | 0.7725 | 0.04 | 0.0625 | 0.0625 | 0.2675 | 0.7725 |
| No log | 21.96 | 176 | 0.9026 | 0.755 | 0.045 | 0.0625 | 0.1 | 0.4175 | 0.755 |
| No log | 22.96 | 184 | 0.8717 | 0.7725 | 0.05 | 0.0625 | 0.1175 | 0.4225 | 0.7725 |
| No log | 23.96 | 192 | 0.9194 | 0.7525 | 0.05 | 0.0625 | 0.15 | 0.4775 | 0.7525 |
| No log | 24.96 | 200 | 0.8943 | 0.775 | 0.05 | 0.0675 | 0.1925 | 0.525 | 0.775 |
| No log | 25.96 | 208 | 0.8964 | 0.77 | 0.0525 | 0.0625 | 0.215 | 0.5225 | 0.77 |
| No log | 26.96 | 216 | 0.9143 | 0.76 | 0.0525 | 0.0625 | 0.25 | 0.5525 | 0.76 |
| No log | 27.96 | 224 | 0.9079 | 0.7775 | 0.0525 | 0.0625 | 0.29 | 0.56 | 0.7775 |
| No log | 28.96 | 232 | 0.9018 | 0.7775 | 0.055 | 0.0675 | 0.315 | 0.59 | 0.7775 |
| No log | 29.96 | 240 | 0.9091 | 0.7875 | 0.055 | 0.0725 | 0.355 | 0.615 | 0.7875 |
| No log | 30.96 | 248 | 0.9056 | 0.785 | 0.0625 | 0.0925 | 0.3775 | 0.64 | 0.785 |
| No log | 31.96 | 256 | 0.9164 | 0.79 | 0.06 | 0.125 | 0.42 | 0.6775 | 0.79 |
| No log | 32.96 | 264 | 0.9293 | 0.7875 | 0.0625 | 0.1425 | 0.4625 | 0.685 | 0.7875 |
| No log | 33.96 | 272 | 0.9669 | 0.7725 | 0.0575 | 0.215 | 0.48 | 0.6875 | 0.7725 |
| No log | 34.96 | 280 | 0.9342 | 0.785 | 0.06 | 0.23 | 0.4725 | 0.69 | 0.785 |
| No log | 35.96 | 288 | 0.9481 | 0.7725 | 0.0625 | 0.205 | 0.4525 | 0.6525 | 0.7725 |
| No log | 36.96 | 296 | 0.9447 | 0.7775 | 0.06 | 0.24 | 0.485 | 0.6875 | 0.7775 |
| No log | 37.96 | 304 | 0.9494 | 0.7925 | 0.0575 | 0.24 | 0.5025 | 0.7025 | 0.7925 |
| No log | 38.96 | 312 | 0.9329 | 0.775 | 0.0575 | 0.2225 | 0.46 | 0.695 | 0.775 |
| No log | 39.96 | 320 | 0.9247 | 0.7875 | 0.06 | 0.23 | 0.4725 | 0.6725 | 0.7875 |
| No log | 40.96 | 328 | 0.9184 | 0.7925 | 0.06 | 0.2325 | 0.465 | 0.665 | 0.7925 |
| No log | 41.96 | 336 | 0.9608 | 0.8025 | 0.06 | 0.1975 | 0.4625 | 0.65 | 0.8025 |
| No log | 42.96 | 344 | 0.9499 | 0.7875 | 0.06 | 0.2075 | 0.445 | 0.64 | 0.7875 |
| No log | 43.96 | 352 | 0.9789 | 0.7825 | 0.06 | 0.205 | 0.495 | 0.64 | 0.7825 |
| No log | 44.96 | 360 | 0.9384 | 0.78 | 0.06 | 0.2125 | 0.49 | 0.6725 | 0.78 |
| No log | 45.96 | 368 | 0.9734 | 0.77 | 0.06 | 0.2075 | 0.54 | 0.7125 | 0.77 |
| No log | 46.96 | 376 | 0.9647 | 0.785 | 0.0625 | 0.215 | 0.5325 | 0.735 | 0.785 |
| No log | 47.96 | 384 | 0.9484 | 0.78 | 0.0625 | 0.2225 | 0.515 | 0.725 | 0.78 |
| No log | 48.96 | 392 | 0.9652 | 0.7875 | 0.0625 | 0.2275 | 0.505 | 0.7325 | 0.7875 |
| No log | 49.96 | 400 | 0.9570 | 0.785 | 0.0625 | 0.22 | 0.4925 | 0.7225 | 0.785 |
| No log | 50.96 | 408 | 0.9432 | 0.7975 | 0.0625 | 0.2075 | 0.52 | 0.7275 | 0.7975 |
| No log | 51.96 | 416 | 0.9562 | 0.79 | 0.0625 | 0.225 | 0.5275 | 0.7325 | 0.79 |
| No log | 52.96 | 424 | 0.9567 | 0.79 | 0.0625 | 0.2375 | 0.5325 | 0.72 | 0.79 |
| No log | 53.96 | 432 | 0.9645 | 0.7875 | 0.0625 | 0.2425 | 0.5325 | 0.7175 | 0.7875 |
| No log | 54.96 | 440 | 0.9721 | 0.7825 | 0.0625 | 0.25 | 0.5275 | 0.725 | 0.7825 |
| No log | 55.96 | 448 | 0.9742 | 0.785 | 0.0625 | 0.2425 | 0.52 | 0.7275 | 0.785 |
| No log | 56.96 | 456 | 0.9699 | 0.785 | 0.0625 | 0.24 | 0.5225 | 0.725 | 0.785 |
| No log | 57.96 | 464 | 0.9637 | 0.785 | 0.0625 | 0.245 | 0.52 | 0.725 | 0.785 |
| No log | 58.96 | 472 | 0.9614 | 0.785 | 0.0625 | 0.2425 | 0.525 | 0.72 | 0.785 |
| No log | 59.96 | 480 | 0.9605 | 0.785 | 0.0625 | 0.2425 | 0.5225 | 0.72 | 0.785 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_1_gate
|
jordyvl
| 2023-12-08T08:55:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T09:28:06Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_1_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_1_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9706
- Accuracy: 0.75
- Exit 0 Accuracy: 0.0575
- Exit 1 Accuracy: 0.055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7554 | 0.09 | 0.0475 | 0.0625 |
| No log | 1.96 | 8 | 2.7048 | 0.1525 | 0.055 | 0.0625 |
| No log | 2.96 | 12 | 2.6225 | 0.18 | 0.0525 | 0.0625 |
| No log | 3.96 | 16 | 2.5506 | 0.1925 | 0.055 | 0.0675 |
| No log | 4.96 | 20 | 2.5032 | 0.22 | 0.05 | 0.0575 |
| No log | 5.96 | 24 | 2.4566 | 0.235 | 0.05 | 0.055 |
| No log | 6.96 | 28 | 2.3653 | 0.2975 | 0.05 | 0.0475 |
| No log | 7.96 | 32 | 2.2942 | 0.3375 | 0.0525 | 0.0525 |
| No log | 8.96 | 36 | 2.2127 | 0.35 | 0.0525 | 0.0425 |
| No log | 9.96 | 40 | 2.1131 | 0.3825 | 0.0525 | 0.0475 |
| No log | 10.96 | 44 | 2.0597 | 0.3875 | 0.0525 | 0.045 |
| No log | 11.96 | 48 | 1.9519 | 0.4325 | 0.0525 | 0.0425 |
| No log | 12.96 | 52 | 1.9216 | 0.4475 | 0.0525 | 0.045 |
| No log | 13.96 | 56 | 1.7836 | 0.4975 | 0.0525 | 0.0425 |
| No log | 14.96 | 60 | 1.7549 | 0.4975 | 0.055 | 0.0375 |
| No log | 15.96 | 64 | 1.6272 | 0.545 | 0.055 | 0.04 |
| No log | 16.96 | 68 | 1.5730 | 0.54 | 0.055 | 0.0425 |
| No log | 17.96 | 72 | 1.5355 | 0.565 | 0.055 | 0.04 |
| No log | 18.96 | 76 | 1.4481 | 0.5875 | 0.055 | 0.0375 |
| No log | 19.96 | 80 | 1.3854 | 0.6125 | 0.055 | 0.04 |
| No log | 20.96 | 84 | 1.3449 | 0.6375 | 0.055 | 0.0325 |
| No log | 21.96 | 88 | 1.2869 | 0.6425 | 0.055 | 0.0275 |
| No log | 22.96 | 92 | 1.2292 | 0.6575 | 0.0575 | 0.0325 |
| No log | 23.96 | 96 | 1.2049 | 0.6625 | 0.0575 | 0.025 |
| No log | 24.96 | 100 | 1.1601 | 0.6725 | 0.0575 | 0.035 |
| No log | 25.96 | 104 | 1.1601 | 0.7025 | 0.0575 | 0.0425 |
| No log | 26.96 | 108 | 1.1110 | 0.7 | 0.0575 | 0.0375 |
| No log | 27.96 | 112 | 1.0922 | 0.715 | 0.06 | 0.0425 |
| No log | 28.96 | 116 | 1.0862 | 0.7175 | 0.06 | 0.045 |
| No log | 29.96 | 120 | 1.0855 | 0.7025 | 0.06 | 0.045 |
| No log | 30.96 | 124 | 1.0540 | 0.715 | 0.06 | 0.0375 |
| No log | 31.96 | 128 | 1.0366 | 0.7175 | 0.06 | 0.045 |
| No log | 32.96 | 132 | 1.0407 | 0.72 | 0.06 | 0.045 |
| No log | 33.96 | 136 | 1.0148 | 0.73 | 0.06 | 0.0425 |
| No log | 34.96 | 140 | 1.0144 | 0.7275 | 0.06 | 0.05 |
| No log | 35.96 | 144 | 1.0054 | 0.7275 | 0.0575 | 0.0525 |
| No log | 36.96 | 148 | 1.0055 | 0.735 | 0.0575 | 0.055 |
| No log | 37.96 | 152 | 1.0171 | 0.7375 | 0.0575 | 0.06 |
| No log | 38.96 | 156 | 0.9861 | 0.745 | 0.0575 | 0.06 |
| No log | 39.96 | 160 | 0.9886 | 0.74 | 0.0575 | 0.06 |
| No log | 40.96 | 164 | 0.9734 | 0.74 | 0.0575 | 0.06 |
| No log | 41.96 | 168 | 0.9816 | 0.745 | 0.0575 | 0.0575 |
| No log | 42.96 | 172 | 0.9672 | 0.735 | 0.0575 | 0.055 |
| No log | 43.96 | 176 | 0.9909 | 0.7375 | 0.0575 | 0.0575 |
| No log | 44.96 | 180 | 0.9831 | 0.75 | 0.0575 | 0.055 |
| No log | 45.96 | 184 | 0.9673 | 0.7375 | 0.0575 | 0.055 |
| No log | 46.96 | 188 | 0.9762 | 0.75 | 0.0575 | 0.0525 |
| No log | 47.96 | 192 | 0.9666 | 0.745 | 0.0575 | 0.0525 |
| No log | 48.96 | 196 | 0.9714 | 0.7525 | 0.0575 | 0.0575 |
| No log | 49.96 | 200 | 0.9661 | 0.7475 | 0.0575 | 0.0575 |
| No log | 50.96 | 204 | 0.9707 | 0.745 | 0.0575 | 0.0575 |
| No log | 51.96 | 208 | 0.9830 | 0.7475 | 0.0575 | 0.0575 |
| No log | 52.96 | 212 | 0.9802 | 0.75 | 0.0575 | 0.05 |
| No log | 53.96 | 216 | 0.9683 | 0.75 | 0.0575 | 0.055 |
| No log | 54.96 | 220 | 0.9797 | 0.7475 | 0.0575 | 0.055 |
| No log | 55.96 | 224 | 0.9874 | 0.7525 | 0.0575 | 0.055 |
| No log | 56.96 | 228 | 0.9824 | 0.75 | 0.0575 | 0.055 |
| No log | 57.96 | 232 | 0.9755 | 0.745 | 0.0575 | 0.055 |
| No log | 58.96 | 236 | 0.9719 | 0.7475 | 0.0575 | 0.055 |
| No log | 59.96 | 240 | 0.9706 | 0.75 | 0.0575 | 0.055 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_2_gate
|
jordyvl
| 2023-12-08T08:55:25Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T11:19:12Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_2_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_2_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0252
- Accuracy: 0.7275
- Exit 0 Accuracy: 0.06
- Exit 1 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7540 | 0.09 | 0.04 | 0.0625 |
| No log | 1.96 | 8 | 2.7085 | 0.1475 | 0.05 | 0.0625 |
| No log | 2.96 | 12 | 2.6275 | 0.16 | 0.0525 | 0.0625 |
| No log | 3.96 | 16 | 2.5536 | 0.19 | 0.055 | 0.0625 |
| No log | 4.96 | 20 | 2.4924 | 0.215 | 0.0525 | 0.0625 |
| No log | 5.96 | 24 | 2.3988 | 0.2675 | 0.05 | 0.0625 |
| No log | 6.96 | 28 | 2.2891 | 0.3275 | 0.055 | 0.07 |
| No log | 7.96 | 32 | 2.2100 | 0.3525 | 0.0575 | 0.035 |
| No log | 8.96 | 36 | 2.1345 | 0.3725 | 0.055 | 0.05 |
| No log | 9.96 | 40 | 2.0877 | 0.385 | 0.055 | 0.0475 |
| No log | 10.96 | 44 | 1.9486 | 0.4725 | 0.055 | 0.0575 |
| No log | 11.96 | 48 | 1.8872 | 0.46 | 0.055 | 0.055 |
| No log | 12.96 | 52 | 1.7546 | 0.545 | 0.0575 | 0.0575 |
| No log | 13.96 | 56 | 1.6773 | 0.545 | 0.0575 | 0.0625 |
| No log | 14.96 | 60 | 1.5625 | 0.5725 | 0.0575 | 0.0625 |
| No log | 15.96 | 64 | 1.4749 | 0.5975 | 0.06 | 0.0625 |
| No log | 16.96 | 68 | 1.3829 | 0.6375 | 0.06 | 0.0625 |
| No log | 17.96 | 72 | 1.3381 | 0.6375 | 0.06 | 0.0625 |
| No log | 18.96 | 76 | 1.2999 | 0.635 | 0.06 | 0.0625 |
| No log | 19.96 | 80 | 1.2148 | 0.6875 | 0.06 | 0.0625 |
| No log | 20.96 | 84 | 1.1983 | 0.6625 | 0.06 | 0.0625 |
| No log | 21.96 | 88 | 1.1637 | 0.6775 | 0.06 | 0.0625 |
| No log | 22.96 | 92 | 1.1256 | 0.6875 | 0.06 | 0.0625 |
| No log | 23.96 | 96 | 1.1456 | 0.6675 | 0.06 | 0.0625 |
| No log | 24.96 | 100 | 1.0709 | 0.7025 | 0.06 | 0.0625 |
| No log | 25.96 | 104 | 1.1113 | 0.695 | 0.06 | 0.06 |
| No log | 26.96 | 108 | 1.0416 | 0.7175 | 0.06 | 0.065 |
| No log | 27.96 | 112 | 1.0716 | 0.6875 | 0.06 | 0.0675 |
| No log | 28.96 | 116 | 1.0192 | 0.7175 | 0.06 | 0.0625 |
| No log | 29.96 | 120 | 1.0237 | 0.705 | 0.06 | 0.0625 |
| No log | 30.96 | 124 | 1.0260 | 0.7075 | 0.06 | 0.0675 |
| No log | 31.96 | 128 | 0.9777 | 0.7275 | 0.06 | 0.0675 |
| No log | 32.96 | 132 | 1.0101 | 0.725 | 0.06 | 0.0675 |
| No log | 33.96 | 136 | 0.9693 | 0.7225 | 0.06 | 0.0725 |
| No log | 34.96 | 140 | 0.9973 | 0.725 | 0.06 | 0.0725 |
| No log | 35.96 | 144 | 0.9890 | 0.7225 | 0.06 | 0.0675 |
| No log | 36.96 | 148 | 0.9947 | 0.73 | 0.06 | 0.0725 |
| No log | 37.96 | 152 | 1.0048 | 0.725 | 0.06 | 0.0725 |
| No log | 38.96 | 156 | 0.9622 | 0.7275 | 0.06 | 0.065 |
| No log | 39.96 | 160 | 0.9894 | 0.7175 | 0.06 | 0.0675 |
| No log | 40.96 | 164 | 0.9635 | 0.735 | 0.06 | 0.0675 |
| No log | 41.96 | 168 | 0.9753 | 0.74 | 0.06 | 0.0725 |
| No log | 42.96 | 172 | 0.9858 | 0.72 | 0.06 | 0.0725 |
| No log | 43.96 | 176 | 0.9874 | 0.735 | 0.06 | 0.07 |
| No log | 44.96 | 180 | 0.9856 | 0.715 | 0.06 | 0.065 |
| No log | 45.96 | 184 | 1.0028 | 0.7275 | 0.06 | 0.0625 |
| No log | 46.96 | 188 | 1.0067 | 0.7325 | 0.06 | 0.0625 |
| No log | 47.96 | 192 | 0.9969 | 0.7275 | 0.06 | 0.0625 |
| No log | 48.96 | 196 | 0.9990 | 0.74 | 0.06 | 0.0625 |
| No log | 49.96 | 200 | 1.0065 | 0.735 | 0.06 | 0.0625 |
| No log | 50.96 | 204 | 1.0133 | 0.735 | 0.06 | 0.0625 |
| No log | 51.96 | 208 | 1.0113 | 0.735 | 0.06 | 0.0625 |
| No log | 52.96 | 212 | 1.0306 | 0.7275 | 0.06 | 0.0625 |
| No log | 53.96 | 216 | 1.0203 | 0.7275 | 0.06 | 0.0625 |
| No log | 54.96 | 220 | 1.0170 | 0.73 | 0.06 | 0.0625 |
| No log | 55.96 | 224 | 1.0214 | 0.725 | 0.06 | 0.0625 |
| No log | 56.96 | 228 | 1.0248 | 0.725 | 0.06 | 0.0625 |
| No log | 57.96 | 232 | 1.0252 | 0.7275 | 0.06 | 0.0625 |
| No log | 58.96 | 236 | 1.0254 | 0.7275 | 0.06 | 0.0625 |
| No log | 59.96 | 240 | 1.0252 | 0.7275 | 0.06 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate
|
jordyvl
| 2023-12-08T08:55:24Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T18:56:01Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_6_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9813
- Accuracy: 0.7425
- Exit 0 Accuracy: 0.055
- Exit 1 Accuracy: 0.1075
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7555 | 0.09 | 0.0425 | 0.0675 |
| No log | 1.96 | 8 | 2.7103 | 0.145 | 0.0425 | 0.06 |
| No log | 2.96 | 12 | 2.6243 | 0.175 | 0.0475 | 0.0625 |
| No log | 3.96 | 16 | 2.5457 | 0.19 | 0.0475 | 0.06 |
| No log | 4.96 | 20 | 2.4802 | 0.22 | 0.055 | 0.0725 |
| No log | 5.96 | 24 | 2.3688 | 0.29 | 0.0525 | 0.1125 |
| No log | 6.96 | 28 | 2.2693 | 0.33 | 0.0525 | 0.105 |
| No log | 7.96 | 32 | 2.1806 | 0.35 | 0.0525 | 0.1275 |
| No log | 8.96 | 36 | 2.0652 | 0.395 | 0.0525 | 0.125 |
| No log | 9.96 | 40 | 1.9632 | 0.4325 | 0.0525 | 0.1225 |
| No log | 10.96 | 44 | 1.8904 | 0.4725 | 0.05 | 0.1175 |
| No log | 11.96 | 48 | 1.7364 | 0.5575 | 0.05 | 0.1225 |
| No log | 12.96 | 52 | 1.6698 | 0.5525 | 0.05 | 0.12 |
| No log | 13.96 | 56 | 1.5785 | 0.59 | 0.0525 | 0.115 |
| No log | 14.96 | 60 | 1.5102 | 0.5975 | 0.0575 | 0.12 |
| No log | 15.96 | 64 | 1.3949 | 0.64 | 0.0575 | 0.12 |
| No log | 16.96 | 68 | 1.3608 | 0.625 | 0.0575 | 0.125 |
| No log | 17.96 | 72 | 1.3010 | 0.64 | 0.0575 | 0.125 |
| No log | 18.96 | 76 | 1.2554 | 0.665 | 0.0575 | 0.125 |
| No log | 19.96 | 80 | 1.1876 | 0.675 | 0.0575 | 0.1225 |
| No log | 20.96 | 84 | 1.1572 | 0.705 | 0.0575 | 0.13 |
| No log | 21.96 | 88 | 1.1204 | 0.6875 | 0.0575 | 0.1275 |
| No log | 22.96 | 92 | 1.0787 | 0.71 | 0.0575 | 0.1375 |
| No log | 23.96 | 96 | 1.0833 | 0.7075 | 0.0575 | 0.13 |
| No log | 24.96 | 100 | 1.0345 | 0.725 | 0.0575 | 0.13 |
| No log | 25.96 | 104 | 1.0832 | 0.7 | 0.0575 | 0.14 |
| No log | 26.96 | 108 | 1.0061 | 0.7275 | 0.0575 | 0.1325 |
| No log | 27.96 | 112 | 1.0357 | 0.695 | 0.0575 | 0.13 |
| No log | 28.96 | 116 | 0.9696 | 0.7325 | 0.0575 | 0.1275 |
| No log | 29.96 | 120 | 1.0087 | 0.6975 | 0.0575 | 0.1275 |
| No log | 30.96 | 124 | 0.9800 | 0.72 | 0.0575 | 0.1175 |
| No log | 31.96 | 128 | 0.9802 | 0.715 | 0.0575 | 0.125 |
| No log | 32.96 | 132 | 0.9751 | 0.7375 | 0.0575 | 0.13 |
| No log | 33.96 | 136 | 0.9578 | 0.725 | 0.0575 | 0.1275 |
| No log | 34.96 | 140 | 0.9624 | 0.725 | 0.0575 | 0.1225 |
| No log | 35.96 | 144 | 0.9676 | 0.72 | 0.0575 | 0.13 |
| No log | 36.96 | 148 | 0.9572 | 0.73 | 0.0575 | 0.1175 |
| No log | 37.96 | 152 | 1.0086 | 0.7175 | 0.0575 | 0.125 |
| No log | 38.96 | 156 | 0.9555 | 0.735 | 0.055 | 0.11 |
| No log | 39.96 | 160 | 0.9469 | 0.74 | 0.055 | 0.115 |
| No log | 40.96 | 164 | 0.9835 | 0.7275 | 0.055 | 0.115 |
| No log | 41.96 | 168 | 0.9364 | 0.745 | 0.055 | 0.1075 |
| No log | 42.96 | 172 | 0.9590 | 0.74 | 0.055 | 0.105 |
| No log | 43.96 | 176 | 0.9499 | 0.7425 | 0.055 | 0.1025 |
| No log | 44.96 | 180 | 0.9731 | 0.7375 | 0.055 | 0.1 |
| No log | 45.96 | 184 | 0.9719 | 0.725 | 0.055 | 0.1025 |
| No log | 46.96 | 188 | 0.9669 | 0.7375 | 0.055 | 0.105 |
| No log | 47.96 | 192 | 0.9713 | 0.7325 | 0.055 | 0.11 |
| No log | 48.96 | 196 | 0.9738 | 0.7475 | 0.055 | 0.1075 |
| No log | 49.96 | 200 | 0.9662 | 0.7425 | 0.055 | 0.1025 |
| No log | 50.96 | 204 | 0.9848 | 0.73 | 0.055 | 0.1025 |
| No log | 51.96 | 208 | 0.9689 | 0.7475 | 0.055 | 0.11 |
| No log | 52.96 | 212 | 0.9690 | 0.7475 | 0.055 | 0.105 |
| No log | 53.96 | 216 | 0.9783 | 0.7475 | 0.055 | 0.1025 |
| No log | 54.96 | 220 | 0.9831 | 0.7425 | 0.055 | 0.1025 |
| No log | 55.96 | 224 | 0.9778 | 0.745 | 0.055 | 0.1025 |
| No log | 56.96 | 228 | 0.9740 | 0.7475 | 0.055 | 0.1025 |
| No log | 57.96 | 232 | 0.9754 | 0.7475 | 0.055 | 0.1075 |
| No log | 58.96 | 236 | 0.9793 | 0.7425 | 0.055 | 0.1075 |
| No log | 59.96 | 240 | 0.9813 | 0.7425 | 0.055 | 0.1075 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_5_gate
|
jordyvl
| 2023-12-08T08:55:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T16:54:10Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_5_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_5_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9668
- Accuracy: 0.7475
- Exit 0 Accuracy: 0.0575
- Exit 1 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7555 | 0.09 | 0.04 | 0.0625 |
| No log | 1.96 | 8 | 2.7056 | 0.14 | 0.0475 | 0.065 |
| No log | 2.96 | 12 | 2.6221 | 0.165 | 0.045 | 0.0575 |
| No log | 3.96 | 16 | 2.5429 | 0.1925 | 0.045 | 0.0625 |
| No log | 4.96 | 20 | 2.4773 | 0.225 | 0.0525 | 0.0625 |
| No log | 5.96 | 24 | 2.3640 | 0.295 | 0.0525 | 0.0625 |
| No log | 6.96 | 28 | 2.2786 | 0.3325 | 0.05 | 0.0625 |
| No log | 7.96 | 32 | 2.1765 | 0.3675 | 0.0525 | 0.0625 |
| No log | 8.96 | 36 | 2.0950 | 0.39 | 0.0525 | 0.0675 |
| No log | 9.96 | 40 | 1.9789 | 0.4475 | 0.055 | 0.07 |
| No log | 10.96 | 44 | 1.9028 | 0.455 | 0.055 | 0.07 |
| No log | 11.96 | 48 | 1.7707 | 0.505 | 0.055 | 0.0725 |
| No log | 12.96 | 52 | 1.7087 | 0.525 | 0.055 | 0.075 |
| No log | 13.96 | 56 | 1.6296 | 0.555 | 0.055 | 0.0525 |
| No log | 14.96 | 60 | 1.5540 | 0.575 | 0.0575 | 0.0575 |
| No log | 15.96 | 64 | 1.4520 | 0.6275 | 0.055 | 0.0475 |
| No log | 16.96 | 68 | 1.3974 | 0.63 | 0.055 | 0.0425 |
| No log | 17.96 | 72 | 1.3386 | 0.655 | 0.055 | 0.0425 |
| No log | 18.96 | 76 | 1.2821 | 0.665 | 0.055 | 0.04 |
| No log | 19.96 | 80 | 1.2171 | 0.68 | 0.055 | 0.0425 |
| No log | 20.96 | 84 | 1.1603 | 0.695 | 0.0525 | 0.05 |
| No log | 21.96 | 88 | 1.1320 | 0.7025 | 0.0525 | 0.05 |
| No log | 22.96 | 92 | 1.0933 | 0.7 | 0.0525 | 0.0425 |
| No log | 23.96 | 96 | 1.0627 | 0.715 | 0.0525 | 0.0425 |
| No log | 24.96 | 100 | 1.0525 | 0.72 | 0.0525 | 0.0575 |
| No log | 25.96 | 104 | 1.0423 | 0.715 | 0.0525 | 0.06 |
| No log | 26.96 | 108 | 1.0229 | 0.715 | 0.0525 | 0.06 |
| No log | 27.96 | 112 | 0.9955 | 0.73 | 0.0525 | 0.0575 |
| No log | 28.96 | 116 | 0.9720 | 0.73 | 0.055 | 0.06 |
| No log | 29.96 | 120 | 0.9958 | 0.715 | 0.055 | 0.07 |
| No log | 30.96 | 124 | 0.9662 | 0.74 | 0.055 | 0.07 |
| No log | 31.96 | 128 | 0.9828 | 0.73 | 0.055 | 0.065 |
| No log | 32.96 | 132 | 0.9726 | 0.73 | 0.055 | 0.065 |
| No log | 33.96 | 136 | 0.9644 | 0.73 | 0.055 | 0.0625 |
| No log | 34.96 | 140 | 0.9596 | 0.7375 | 0.055 | 0.0625 |
| No log | 35.96 | 144 | 0.9471 | 0.7375 | 0.055 | 0.0675 |
| No log | 36.96 | 148 | 0.9542 | 0.7375 | 0.055 | 0.0575 |
| No log | 37.96 | 152 | 0.9740 | 0.72 | 0.0525 | 0.0625 |
| No log | 38.96 | 156 | 0.9314 | 0.7525 | 0.0525 | 0.0625 |
| No log | 39.96 | 160 | 0.9308 | 0.75 | 0.055 | 0.0625 |
| No log | 40.96 | 164 | 0.9463 | 0.7475 | 0.0575 | 0.06 |
| No log | 41.96 | 168 | 0.9369 | 0.75 | 0.0575 | 0.06 |
| No log | 42.96 | 172 | 0.9585 | 0.735 | 0.0575 | 0.0525 |
| No log | 43.96 | 176 | 0.9400 | 0.75 | 0.0575 | 0.06 |
| No log | 44.96 | 180 | 0.9233 | 0.7625 | 0.0575 | 0.065 |
| No log | 45.96 | 184 | 0.9533 | 0.75 | 0.0575 | 0.0725 |
| No log | 46.96 | 188 | 0.9584 | 0.745 | 0.0575 | 0.065 |
| No log | 47.96 | 192 | 0.9659 | 0.7375 | 0.0575 | 0.06 |
| No log | 48.96 | 196 | 0.9480 | 0.7575 | 0.0575 | 0.0625 |
| No log | 49.96 | 200 | 0.9712 | 0.745 | 0.0575 | 0.065 |
| No log | 50.96 | 204 | 0.9800 | 0.7375 | 0.0575 | 0.0675 |
| No log | 51.96 | 208 | 0.9495 | 0.75 | 0.0575 | 0.0675 |
| No log | 52.96 | 212 | 0.9579 | 0.74 | 0.0575 | 0.0575 |
| No log | 53.96 | 216 | 0.9777 | 0.7475 | 0.0575 | 0.0525 |
| No log | 54.96 | 220 | 0.9731 | 0.7475 | 0.0575 | 0.05 |
| No log | 55.96 | 224 | 0.9647 | 0.7475 | 0.0575 | 0.0525 |
| No log | 56.96 | 228 | 0.9649 | 0.745 | 0.0575 | 0.0575 |
| No log | 57.96 | 232 | 0.9666 | 0.7475 | 0.0575 | 0.065 |
| No log | 58.96 | 236 | 0.9670 | 0.7475 | 0.0575 | 0.065 |
| No log | 59.96 | 240 | 0.9668 | 0.7475 | 0.0575 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate
|
jordyvl
| 2023-12-08T08:55:24Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T15:00:26Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_4_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0854
- Accuracy: 0.72
- Exit 0 Accuracy: 0.0625
- Exit 1 Accuracy: 0.0625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7541 | 0.09 | 0.04 | 0.0625 |
| No log | 1.96 | 8 | 2.7076 | 0.1425 | 0.055 | 0.0625 |
| No log | 2.96 | 12 | 2.6235 | 0.1725 | 0.055 | 0.0625 |
| No log | 3.96 | 16 | 2.5506 | 0.205 | 0.0525 | 0.0675 |
| No log | 4.96 | 20 | 2.4963 | 0.215 | 0.0525 | 0.075 |
| No log | 5.96 | 24 | 2.4094 | 0.2725 | 0.0575 | 0.0625 |
| No log | 6.96 | 28 | 2.2982 | 0.305 | 0.0575 | 0.0625 |
| No log | 7.96 | 32 | 2.2131 | 0.3375 | 0.0575 | 0.0625 |
| No log | 8.96 | 36 | 2.1493 | 0.3325 | 0.0575 | 0.0625 |
| No log | 9.96 | 40 | 2.0452 | 0.375 | 0.06 | 0.0625 |
| No log | 10.96 | 44 | 1.9553 | 0.4275 | 0.06 | 0.0625 |
| No log | 11.96 | 48 | 1.8302 | 0.5075 | 0.06 | 0.0625 |
| No log | 12.96 | 52 | 1.7440 | 0.54 | 0.06 | 0.0625 |
| No log | 13.96 | 56 | 1.6427 | 0.5575 | 0.0625 | 0.0625 |
| No log | 14.96 | 60 | 1.5650 | 0.6 | 0.0625 | 0.0625 |
| No log | 15.96 | 64 | 1.4875 | 0.62 | 0.065 | 0.0625 |
| No log | 16.96 | 68 | 1.4124 | 0.645 | 0.0625 | 0.0625 |
| No log | 17.96 | 72 | 1.3870 | 0.65 | 0.0625 | 0.065 |
| No log | 18.96 | 76 | 1.2946 | 0.675 | 0.0575 | 0.0625 |
| No log | 19.96 | 80 | 1.2652 | 0.67 | 0.0575 | 0.065 |
| No log | 20.96 | 84 | 1.1967 | 0.6975 | 0.0575 | 0.065 |
| No log | 21.96 | 88 | 1.1782 | 0.695 | 0.0575 | 0.065 |
| No log | 22.96 | 92 | 1.1372 | 0.6975 | 0.0575 | 0.065 |
| No log | 23.96 | 96 | 1.1193 | 0.7025 | 0.0575 | 0.065 |
| No log | 24.96 | 100 | 1.0844 | 0.7175 | 0.06 | 0.0625 |
| No log | 25.96 | 104 | 1.0746 | 0.705 | 0.06 | 0.0625 |
| No log | 26.96 | 108 | 1.0720 | 0.72 | 0.06 | 0.065 |
| No log | 27.96 | 112 | 1.0455 | 0.72 | 0.06 | 0.0625 |
| No log | 28.96 | 116 | 1.0266 | 0.7275 | 0.06 | 0.0625 |
| No log | 29.96 | 120 | 1.0322 | 0.7275 | 0.0625 | 0.0625 |
| No log | 30.96 | 124 | 1.0310 | 0.73 | 0.0625 | 0.0625 |
| No log | 31.96 | 128 | 1.0186 | 0.7275 | 0.0625 | 0.0625 |
| No log | 32.96 | 132 | 1.0236 | 0.725 | 0.0625 | 0.0625 |
| No log | 33.96 | 136 | 1.0142 | 0.73 | 0.0625 | 0.0625 |
| No log | 34.96 | 140 | 1.0096 | 0.7225 | 0.0625 | 0.0625 |
| No log | 35.96 | 144 | 0.9953 | 0.74 | 0.0625 | 0.0625 |
| No log | 36.96 | 148 | 1.0065 | 0.7275 | 0.0625 | 0.0625 |
| No log | 37.96 | 152 | 1.0247 | 0.725 | 0.06 | 0.0625 |
| No log | 38.96 | 156 | 1.0083 | 0.7325 | 0.06 | 0.0625 |
| No log | 39.96 | 160 | 1.0138 | 0.725 | 0.0625 | 0.0625 |
| No log | 40.96 | 164 | 1.0311 | 0.725 | 0.065 | 0.0625 |
| No log | 41.96 | 168 | 1.0301 | 0.7175 | 0.0625 | 0.0625 |
| No log | 42.96 | 172 | 1.0492 | 0.725 | 0.0625 | 0.0625 |
| No log | 43.96 | 176 | 1.0431 | 0.7225 | 0.0625 | 0.0625 |
| No log | 44.96 | 180 | 1.0565 | 0.7275 | 0.0625 | 0.06 |
| No log | 45.96 | 184 | 1.0747 | 0.7175 | 0.0625 | 0.0625 |
| No log | 46.96 | 188 | 1.0534 | 0.7125 | 0.0625 | 0.06 |
| No log | 47.96 | 192 | 1.0610 | 0.72 | 0.0625 | 0.06 |
| No log | 48.96 | 196 | 1.0721 | 0.7175 | 0.0625 | 0.0625 |
| No log | 49.96 | 200 | 1.0744 | 0.7175 | 0.0625 | 0.0625 |
| No log | 50.96 | 204 | 1.0776 | 0.72 | 0.0625 | 0.0625 |
| No log | 51.96 | 208 | 1.0777 | 0.72 | 0.0625 | 0.0625 |
| No log | 52.96 | 212 | 1.0831 | 0.7225 | 0.0625 | 0.0625 |
| No log | 53.96 | 216 | 1.0786 | 0.7225 | 0.0625 | 0.0625 |
| No log | 54.96 | 220 | 1.0806 | 0.725 | 0.0625 | 0.0625 |
| No log | 55.96 | 224 | 1.0844 | 0.7275 | 0.0625 | 0.0625 |
| No log | 56.96 | 228 | 1.0858 | 0.73 | 0.0625 | 0.0625 |
| No log | 57.96 | 232 | 1.0861 | 0.7225 | 0.0625 | 0.0625 |
| No log | 58.96 | 236 | 1.0857 | 0.72 | 0.0625 | 0.0625 |
| No log | 59.96 | 240 | 1.0854 | 0.72 | 0.0625 | 0.0625 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_8_gate
|
jordyvl
| 2023-12-08T08:55:23Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T22:57:37Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_8_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_8_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0199
- Accuracy: 0.7575
- Exit 0 Accuracy: 0.06
- Exit 1 Accuracy: 0.12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7540 | 0.09 | 0.0425 | 0.0625 |
| No log | 1.96 | 8 | 2.7063 | 0.1525 | 0.045 | 0.0625 |
| No log | 2.96 | 12 | 2.6279 | 0.1775 | 0.05 | 0.0625 |
| No log | 3.96 | 16 | 2.5448 | 0.2025 | 0.0525 | 0.0625 |
| No log | 4.96 | 20 | 2.4829 | 0.225 | 0.05 | 0.0625 |
| No log | 5.96 | 24 | 2.3776 | 0.285 | 0.0525 | 0.0625 |
| No log | 6.96 | 28 | 2.2785 | 0.33 | 0.055 | 0.0625 |
| No log | 7.96 | 32 | 2.1809 | 0.3575 | 0.055 | 0.0625 |
| No log | 8.96 | 36 | 2.0830 | 0.3875 | 0.055 | 0.085 |
| No log | 9.96 | 40 | 1.9854 | 0.4275 | 0.055 | 0.07 |
| No log | 10.96 | 44 | 1.8839 | 0.485 | 0.055 | 0.07 |
| No log | 11.96 | 48 | 1.7375 | 0.545 | 0.055 | 0.0775 |
| No log | 12.96 | 52 | 1.6524 | 0.5725 | 0.0575 | 0.095 |
| No log | 13.96 | 56 | 1.5703 | 0.565 | 0.0575 | 0.1175 |
| No log | 14.96 | 60 | 1.4606 | 0.6175 | 0.0575 | 0.1175 |
| No log | 15.96 | 64 | 1.3988 | 0.6525 | 0.055 | 0.1175 |
| No log | 16.96 | 68 | 1.3036 | 0.645 | 0.055 | 0.1175 |
| No log | 17.96 | 72 | 1.2528 | 0.6875 | 0.0575 | 0.12 |
| No log | 18.96 | 76 | 1.2113 | 0.69 | 0.0575 | 0.12 |
| No log | 19.96 | 80 | 1.1515 | 0.6975 | 0.0575 | 0.1175 |
| No log | 20.96 | 84 | 1.1117 | 0.71 | 0.0575 | 0.1175 |
| No log | 21.96 | 88 | 1.0811 | 0.7075 | 0.0575 | 0.1175 |
| No log | 22.96 | 92 | 1.0649 | 0.7075 | 0.0575 | 0.1175 |
| No log | 23.96 | 96 | 1.0342 | 0.7275 | 0.0575 | 0.1175 |
| No log | 24.96 | 100 | 0.9994 | 0.7275 | 0.0575 | 0.1175 |
| No log | 25.96 | 104 | 1.0332 | 0.7275 | 0.0575 | 0.1175 |
| No log | 26.96 | 108 | 0.9696 | 0.7325 | 0.06 | 0.1175 |
| No log | 27.96 | 112 | 0.9957 | 0.7325 | 0.06 | 0.1175 |
| No log | 28.96 | 116 | 0.9794 | 0.7325 | 0.06 | 0.1175 |
| No log | 29.96 | 120 | 0.9602 | 0.735 | 0.06 | 0.1175 |
| No log | 30.96 | 124 | 0.9900 | 0.75 | 0.06 | 0.12 |
| No log | 31.96 | 128 | 0.9649 | 0.74 | 0.06 | 0.1175 |
| No log | 32.96 | 132 | 0.9935 | 0.7325 | 0.06 | 0.12 |
| No log | 33.96 | 136 | 0.9637 | 0.7475 | 0.06 | 0.12 |
| No log | 34.96 | 140 | 0.9993 | 0.7325 | 0.06 | 0.12 |
| No log | 35.96 | 144 | 0.9841 | 0.7375 | 0.06 | 0.1225 |
| No log | 36.96 | 148 | 0.9719 | 0.7475 | 0.06 | 0.1225 |
| No log | 37.96 | 152 | 0.9901 | 0.745 | 0.06 | 0.125 |
| No log | 38.96 | 156 | 0.9692 | 0.7475 | 0.06 | 0.115 |
| No log | 39.96 | 160 | 0.9850 | 0.7425 | 0.06 | 0.1025 |
| No log | 40.96 | 164 | 0.9876 | 0.75 | 0.06 | 0.1 |
| No log | 41.96 | 168 | 0.9867 | 0.7525 | 0.06 | 0.095 |
| No log | 42.96 | 172 | 1.0042 | 0.7475 | 0.06 | 0.085 |
| No log | 43.96 | 176 | 1.0099 | 0.735 | 0.06 | 0.1025 |
| No log | 44.96 | 180 | 0.9988 | 0.755 | 0.06 | 0.1025 |
| No log | 45.96 | 184 | 1.0033 | 0.7475 | 0.06 | 0.1025 |
| No log | 46.96 | 188 | 1.0105 | 0.7475 | 0.06 | 0.105 |
| No log | 47.96 | 192 | 1.0103 | 0.75 | 0.06 | 0.1025 |
| No log | 48.96 | 196 | 1.0097 | 0.7575 | 0.06 | 0.11 |
| No log | 49.96 | 200 | 0.9975 | 0.755 | 0.06 | 0.11 |
| No log | 50.96 | 204 | 1.0095 | 0.76 | 0.06 | 0.115 |
| No log | 51.96 | 208 | 1.0203 | 0.7575 | 0.06 | 0.1275 |
| No log | 52.96 | 212 | 1.0304 | 0.745 | 0.06 | 0.1225 |
| No log | 53.96 | 216 | 1.0286 | 0.75 | 0.06 | 0.125 |
| No log | 54.96 | 220 | 1.0193 | 0.7575 | 0.06 | 0.1275 |
| No log | 55.96 | 224 | 1.0138 | 0.7575 | 0.06 | 0.13 |
| No log | 56.96 | 228 | 1.0168 | 0.76 | 0.06 | 0.1225 |
| No log | 57.96 | 232 | 1.0196 | 0.76 | 0.06 | 0.12 |
| No log | 58.96 | 236 | 1.0203 | 0.7575 | 0.06 | 0.12 |
| No log | 59.96 | 240 | 1.0199 | 0.7575 | 0.06 | 0.12 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
jordyvl/EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_7_gate
|
jordyvl
| 2023-12-08T08:55:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlmv3",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-04T20:54:04Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_7_gate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# EElayoutlmv3_jordyvl_rvl_cdip_100_examples_per_class_2023-12-04_txt_vis_concat_enc_7_gate
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9969
- Accuracy: 0.7375
- Exit 0 Accuracy: 0.0575
- Exit 1 Accuracy: 0.135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 24
- total_train_batch_size: 192
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Exit 0 Accuracy | Exit 1 Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:---------------:|
| No log | 0.96 | 4 | 2.7554 | 0.09 | 0.04 | 0.07 |
| No log | 1.96 | 8 | 2.7058 | 0.15 | 0.0475 | 0.0625 |
| No log | 2.96 | 12 | 2.6226 | 0.1675 | 0.05 | 0.0625 |
| No log | 3.96 | 16 | 2.5435 | 0.19 | 0.0525 | 0.0625 |
| No log | 4.96 | 20 | 2.4855 | 0.2075 | 0.0525 | 0.0625 |
| No log | 5.96 | 24 | 2.3833 | 0.2925 | 0.0525 | 0.065 |
| No log | 6.96 | 28 | 2.2843 | 0.3375 | 0.0525 | 0.0725 |
| No log | 7.96 | 32 | 2.1867 | 0.3525 | 0.0525 | 0.0875 |
| No log | 8.96 | 36 | 2.0982 | 0.3775 | 0.0525 | 0.09 |
| No log | 9.96 | 40 | 1.9870 | 0.4375 | 0.055 | 0.09 |
| No log | 10.96 | 44 | 1.8947 | 0.4575 | 0.055 | 0.0725 |
| No log | 11.96 | 48 | 1.7869 | 0.495 | 0.055 | 0.065 |
| No log | 12.96 | 52 | 1.7034 | 0.54 | 0.0575 | 0.0575 |
| No log | 13.96 | 56 | 1.6100 | 0.57 | 0.0575 | 0.055 |
| No log | 14.96 | 60 | 1.5372 | 0.5675 | 0.0575 | 0.065 |
| No log | 15.96 | 64 | 1.4592 | 0.61 | 0.0575 | 0.07 |
| No log | 16.96 | 68 | 1.4024 | 0.6125 | 0.0575 | 0.08 |
| No log | 17.96 | 72 | 1.3378 | 0.64 | 0.0575 | 0.075 |
| No log | 18.96 | 76 | 1.2934 | 0.66 | 0.0575 | 0.085 |
| No log | 19.96 | 80 | 1.2202 | 0.655 | 0.0575 | 0.09 |
| No log | 20.96 | 84 | 1.1826 | 0.6725 | 0.055 | 0.085 |
| No log | 21.96 | 88 | 1.1325 | 0.6925 | 0.055 | 0.09 |
| No log | 22.96 | 92 | 1.0960 | 0.7275 | 0.055 | 0.0875 |
| No log | 23.96 | 96 | 1.0852 | 0.69 | 0.055 | 0.0925 |
| No log | 24.96 | 100 | 1.0503 | 0.7075 | 0.055 | 0.0875 |
| No log | 25.96 | 104 | 1.0417 | 0.705 | 0.055 | 0.0975 |
| No log | 26.96 | 108 | 1.0174 | 0.71 | 0.055 | 0.095 |
| No log | 27.96 | 112 | 0.9902 | 0.715 | 0.055 | 0.1 |
| No log | 28.96 | 116 | 1.0088 | 0.6975 | 0.055 | 0.1025 |
| No log | 29.96 | 120 | 0.9992 | 0.7125 | 0.055 | 0.1075 |
| No log | 30.96 | 124 | 0.9688 | 0.72 | 0.0575 | 0.1125 |
| No log | 31.96 | 128 | 0.9745 | 0.7275 | 0.0575 | 0.11 |
| No log | 32.96 | 132 | 0.9688 | 0.7175 | 0.055 | 0.1125 |
| No log | 33.96 | 136 | 0.9720 | 0.7225 | 0.055 | 0.11 |
| No log | 34.96 | 140 | 0.9515 | 0.7375 | 0.055 | 0.1125 |
| No log | 35.96 | 144 | 0.9717 | 0.73 | 0.055 | 0.115 |
| No log | 36.96 | 148 | 0.9686 | 0.7225 | 0.055 | 0.1125 |
| No log | 37.96 | 152 | 0.9646 | 0.7275 | 0.055 | 0.115 |
| No log | 38.96 | 156 | 0.9522 | 0.7375 | 0.055 | 0.1225 |
| No log | 39.96 | 160 | 0.9844 | 0.735 | 0.055 | 0.12 |
| No log | 40.96 | 164 | 0.9815 | 0.745 | 0.055 | 0.1075 |
| No log | 41.96 | 168 | 0.9866 | 0.7225 | 0.055 | 0.115 |
| No log | 42.96 | 172 | 0.9835 | 0.7325 | 0.055 | 0.115 |
| No log | 43.96 | 176 | 0.9902 | 0.74 | 0.055 | 0.11 |
| No log | 44.96 | 180 | 0.9843 | 0.745 | 0.055 | 0.115 |
| No log | 45.96 | 184 | 1.0099 | 0.725 | 0.055 | 0.1225 |
| No log | 46.96 | 188 | 0.9917 | 0.7325 | 0.0575 | 0.125 |
| No log | 47.96 | 192 | 0.9817 | 0.735 | 0.0575 | 0.13 |
| No log | 48.96 | 196 | 1.0048 | 0.73 | 0.055 | 0.13 |
| No log | 49.96 | 200 | 0.9973 | 0.7375 | 0.0575 | 0.1275 |
| No log | 50.96 | 204 | 0.9899 | 0.7375 | 0.0575 | 0.1275 |
| No log | 51.96 | 208 | 0.9913 | 0.7375 | 0.0575 | 0.1275 |
| No log | 52.96 | 212 | 0.9975 | 0.735 | 0.0575 | 0.13 |
| No log | 53.96 | 216 | 1.0028 | 0.7325 | 0.0575 | 0.13 |
| No log | 54.96 | 220 | 0.9928 | 0.735 | 0.0575 | 0.1275 |
| No log | 55.96 | 224 | 0.9866 | 0.735 | 0.0575 | 0.1325 |
| No log | 56.96 | 228 | 0.9884 | 0.7375 | 0.0575 | 0.1275 |
| No log | 57.96 | 232 | 0.9943 | 0.7425 | 0.0575 | 0.13 |
| No log | 58.96 | 236 | 0.9967 | 0.74 | 0.0575 | 0.13 |
| No log | 59.96 | 240 | 0.9969 | 0.7375 | 0.0575 | 0.135 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
kishore2/zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList
|
kishore2
| 2023-12-08T08:54:58Z | 0 | 0 | null |
[
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ",
"base_model:finetune:TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T08:43:45Z |
---
license: apache-2.0
base_model: TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ
tags:
- generated_from_trainer
model-index:
- name: zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7B-OH2p5-alpha-tags-86-FT-prompt3-recHyperparams_50Steps_randTagList
This model is a fine-tuned version of [TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GPTQ) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
rpratap2102/The_Misfits
|
rpratap2102
| 2023-12-08T08:54:07Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"financial-sentiment-analysis",
"sentiment-analysis",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T08:29:13Z |
---
language: "en"
tags:
- financial-sentiment-analysis
- sentiment-analysis
widget:
- text: "growth is strong and we have plenty of liquidity"
---
`FinBERT` is a BERT model pre-trained on financial communication text. The purpose is to enhance financial NLP research and practice. It is trained on the following three financial communication corpora, totaling 4.9B tokens.
- Corporate Reports 10-K & 10-Q: 2.5B tokens
- Earnings Call Transcripts: 1.3B tokens
- Analyst Reports: 1.1B tokens
More technical details on `FinBERT`: [Click Link](https://github.com/yya518/FinBERT)
This released `finbert-tone` model is the `FinBERT` model fine-tuned on 10,000 manually annotated (positive, negative, neutral) sentences from analyst reports. This model achieves superior performance on the financial tone analysis task. If you are simply interested in using `FinBERT` for financial tone analysis, give it a try.
# How to use
You can use this model with the Transformers pipeline for sentiment analysis.
```python
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import pipeline
finbert = BertForSequenceClassification.from_pretrained('rpratap2102/The_Misfits',num_labels=3)
tokenizer = BertTokenizer.from_pretrained('rpratap2102/The_Misfits')
nlp = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer)
sentences = ["there is a shortage of capital, and we need extra financing",
"growth is strong and we have plenty of liquidity",
"there are doubts about our finances",
"profits are flat"]
results = nlp(sentences)
print(results) #LABEL_0: neutral; LABEL_1: positive; LABEL_2: negative
```
|
mehmetbh/LunarLanderFirstTrial
|
mehmetbh
| 2023-12-08T08:48:28Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T08:47:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.62 +/- 22.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
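Since the usage section above is still a placeholder, here is a minimal sketch of how this checkpoint could be loaded and evaluated with `huggingface_sb3` and Stable-Baselines3. The checkpoint filename below is an assumption; check the repository's files for the actual archive name.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub.
# NOTE: the filename is an assumption; check the repo's "Files and versions" tab for the real name.
checkpoint = load_from_hub(
    repo_id="mehmetbh/LunarLanderFirstTrial",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```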
|
Wolverine01/ppo-Huggy
|
Wolverine01
| 2023-12-08T08:43:44Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-12-02T11:41:15Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Wolverine01/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
multimodalart/polipure-repeats-3-steps-750-rank64-no-prior
|
multimodalart
| 2023-12-08T08:40:51Z | 12 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:51:22Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
license: openrail++
widget:
- text: '<s0><s1>'
---
# SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-750-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/polipure-repeats-3-steps-750-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-750-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-750-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/polipure-repeats-3-steps-750-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Rompo/Rompov
|
Rompo
| 2023-12-08T08:17:26Z | 0 | 2 | null |
[
"music",
"image-to-3d",
"en",
"bg",
"region:us"
] |
image-to-3d
| 2023-12-08T08:13:27Z |
---
language:
- en
- bg
metrics:
- accuracy
- character
pipeline_tag: image-to-3d
tags:
- music
---
|
brettbbb/cs_cot_16
|
brettbbb
| 2023-12-08T08:16:15Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T08:13:44Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: cs_cot_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cs_cot_16
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
UnbrokenXM/whisper-base-Chinese-20
|
UnbrokenXM
| 2023-12-08T08:07:36Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-base",
"base_model:adapter:openai/whisper-base",
"region:us"
] | null | 2023-12-08T08:07:35Z |
---
library_name: peft
base_model: openai/whisper-base
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|
Paul333/lyt_results
|
Paul333
| 2023-12-08T08:00:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilbert/distilgpt2",
"base_model:finetune:distilbert/distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T07:54:16Z |
---
license: apache-2.0
base_model: distilgpt2
tags:
- generated_from_trainer
model-index:
- name: lyt_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lyt_results
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7435
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 283 | 3.7645 |
| 3.8884 | 2.0 | 566 | 3.7468 |
| 3.8884 | 3.0 | 849 | 3.7435 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.13.1
|
Phiby/superwomen
|
Phiby
| 2023-12-08T07:56:53Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T07:31:11Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: superwomen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# superwomen
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 469 | 2.7836 | 0.778 |
| 3.9815 | 2.0 | 938 | 1.2347 | 0.898 |
| 1.875 | 3.0 | 1407 | 0.6481 | 0.9273 |
| 0.8377 | 4.0 | 1876 | 0.4547 | 0.9393 |
| 0.4736 | 5.0 | 2345 | 0.4064 | 0.942 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mizy/oppo-7b-q4_0-gguf
|
mizy
| 2023-12-08T07:55:21Z | 1 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2023-11-21T11:12:45Z |
from [oppo-experiemental-7B](https://huggingface.co/OPPOResearchInstitute/oppo-experiemental-7B)
|
multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior
|
multimodalart
| 2023-12-08T07:41:28Z | 5 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:57:24Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-4-steps-1200-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
alisontsao/my-awesome-text-classification
|
alisontsao
| 2023-12-08T07:39:55Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T07:29:44Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my-awesome-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 469 | 2.7836 | 0.778 |
| 3.9815 | 2.0 | 938 | 1.2347 | 0.898 |
| 1.875 | 3.0 | 1407 | 0.6481 | 0.9273 |
| 0.8377 | 4.0 | 1876 | 0.4547 | 0.9393 |
| 0.4736 | 5.0 | 2345 | 0.4064 | 0.942 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior
|
multimodalart
| 2023-12-08T07:38:47Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:58:36Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-3-steps-1200-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior
|
multimodalart
| 2023-12-08T07:33:26Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:52:40Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/polipure-repeats-3-steps-1200-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
JuanMa360/room-classification
|
JuanMa360
| 2023-12-08T07:32:42Z | 4,587 | 4 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-09-15T17:50:28Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: room-classification
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8650000095367432
---
# room-classification
House & Apartments Classification model 🤗🖼️
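A minimal usage sketch with the 🤗 `transformers` image-classification pipeline (the image path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub.
classifier = pipeline("image-classification", model="JuanMa360/room-classification")

# Classify a local photo (replace the path with your own image).
predictions = classifier("path/to/room_photo.jpg")
print(predictions)
```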
## Example Images
#### Exterior

#### closets

#### kitchen

#### others

|
Ketak-ZoomRx/Drug_67k_pyt
|
Ketak-ZoomRx
| 2023-12-08T07:32:11Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"gpt",
"llm",
"large language model",
"h2o-llmstudio",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2023-12-08T07:31:21Z |
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---
# Model Card
## Summary
This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped)
## Usage
To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate` and `torch` libraries installed.
```bash
pip install transformers==4.29.2
pip install einops==0.6.1
pip install accelerate==0.19.0
pip install torch==2.0.0
```
```python
import torch
from transformers import pipeline
generate_text = pipeline(
model="Ketak-ZoomRx/Drug_67k_pyt",
torch_dtype="auto",
trust_remote_code=True,
use_fast=True,
device_map={"": "cuda:0"},
)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:
```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```
```bash
<|prompt|>Why is drinking water so healthy?<|endoftext|><|answer|>
```
Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"Ketak-ZoomRx/Drug_67k_pyt",
use_fast=True,
padding_side="left",
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
"Ketak-ZoomRx/Drug_67k_pyt",
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)
res = generate_text(
"Why is drinking water so healthy?",
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)
print(res[0]["generated_text"])
```
You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Ketak-ZoomRx/Drug_67k_pyt" # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?<|endoftext|><|answer|>"
tokenizer = AutoTokenizer.from_pretrained(
model_name,
use_fast=True,
trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map={"": "cuda:0"},
trust_remote_code=True,
)
model.cuda().eval()
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")
# generate configuration can be modified to your needs
tokens = model.generate(
input_ids=inputs["input_ids"],
attention_mask=inputs["attention_mask"],
min_new_tokens=2,
max_new_tokens=256,
do_sample=False,
num_beams=1,
temperature=float(0.0),
repetition_penalty=float(1.2),
renormalize_logits=True
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```
## Quantization and sharding
You can load the model with quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Sharding across multiple GPUs is also possible by setting ```device_map="auto"```.
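For example, a minimal 8-bit loading sketch (assumes the `bitsandbytes` package is installed; 4-bit loading works the same way with `load_in_4bit=True`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Ketak-ZoomRx/Drug_67k_pyt"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# Load the model in 8-bit precision and shard it automatically across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
)
```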
## Model Architecture
```
GPTNeoXForCausalLM(
(gpt_neox): GPTNeoXModel(
(embed_in): Embedding(50304, 2560)
(layers): ModuleList(
(0-31): 32 x GPTNeoXLayer(
(input_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(post_attention_layernorm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
(attention): GPTNeoXAttention(
(rotary_emb): RotaryEmbedding()
(query_key_value): Linear(in_features=2560, out_features=7680, bias=True)
(dense): Linear(in_features=2560, out_features=2560, bias=True)
)
(mlp): GPTNeoXMLP(
(dense_h_to_4h): Linear(in_features=2560, out_features=10240, bias=True)
(dense_4h_to_h): Linear(in_features=10240, out_features=2560, bias=True)
(act): GELUActivation()
)
)
)
(final_layer_norm): LayerNorm((2560,), eps=1e-05, elementwise_affine=True)
)
(embed_out): Linear(in_features=2560, out_features=50304, bias=False)
)
```
## Model Configuration
This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
## Disclaimer
Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.
- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
|
jiangminghan/model_dog
|
jiangminghan
| 2023-12-08T07:32:07Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-12-07T12:52:21Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - jiangminghan/model_dog
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of sks dog" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
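A minimal inference sketch with 🧨 diffusers (the prompt and output filename are illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fine-tuned DreamBooth weights from the Hub.
pipeline = StableDiffusionPipeline.from_pretrained(
    "jiangminghan/model_dog", torch_dtype=torch.float16
).to("cuda")

# The identifier "sks dog" is the instance prompt the weights were trained on.
image = pipeline("a photo of sks dog in a bucket").images[0]
image.save("sks_dog.png")
```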
|
cwchang/my-awesome-tx-v2
|
cwchang
| 2023-12-08T07:29:09Z | 9 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T07:28:52Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my-awesome-tx-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-tx-v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3863
- Accuracy: 0.9493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
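As a rough sketch, the configuration above corresponds to `TrainingArguments` along these lines (the `output_dir` is a placeholder; the optimizer and scheduler values match the Trainer defaults listed above):
```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="my-awesome-tx-v2",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```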
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 469 | 2.6771 | 0.816 |
| 3.9145 | 2.0 | 938 | 1.1744 | 0.901 |
| 1.7848 | 3.0 | 1407 | 0.6144 | 0.9333 |
| 0.7926 | 4.0 | 1876 | 0.4327 | 0.9477 |
| 0.4466 | 5.0 | 2345 | 0.3863 | 0.9493 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Owlcateagle/my-awesome-text-classification
|
Owlcateagle
| 2023-12-08T07:27:54Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T07:27:43Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my-awesome-text-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my-awesome-text-classification
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4472
- Accuracy: 0.942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 469 | 2.7365 | 0.807 |
| 3.9635 | 2.0 | 938 | 1.1976 | 0.9023 |
| 1.8301 | 3.0 | 1407 | 0.6323 | 0.923 |
| 0.8128 | 4.0 | 1876 | 0.4472 | 0.942 |
| 0.4605 | 5.0 | 2345 | 0.4010 | 0.941 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior
|
multimodalart
| 2023-12-08T07:26:02Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:57:24Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-4-steps-750-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
breezedeus/coin-clip-vit-base-patch32
|
breezedeus
| 2023-12-08T07:23:59Z | 45 | 3 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"vision",
"coin",
"coin-retrieval",
"coin-recognition",
"coin-search-engine",
"multi-modal learning",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2023-11-26T07:48:48Z |
---
tags:
- vision
- coin
- clip
- coin-retrieval
- coin-recognition
- coin-search-engine
- multi-modal learning
widget:
- src: >-
https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
license: apache-2.0
library_name: transformers
pipeline_tag: feature-extraction
---
# Coin-CLIP 🪙 : Enhancing Coin Image Retrieval with CLIP
## Model Details / 模型细节
This model (**Coin-CLIP**) is built upon
OpenAI's **[CLIP](https://huggingface.co/openai/clip-vit-base-patch32) (ViT-B/32)** model and fine-tuned on
a dataset of more than `340,000` coin images using contrastive learning techniques. This specialized model is designed to significantly improve feature extraction for coin images, leading to more accurate image-based search capabilities. Coin-CLIP combines the power of the Vision Transformer (ViT) with CLIP's multimodal learning capabilities, specifically tailored for the numismatic domain.
**Key Features:**
- State-of-the-art coin image retrieval;
- Enhanced feature extraction for numismatic images;
- Seamless integration with CLIP's multimodal learning.
本模型(**Coin-CLIP**)
在 OpenAI 的 **[CLIP](https://huggingface.co/openai/clip-vit-base-patch32) (ViT-B/32)** 模型基础上,利用对比学习技术在超过 `340,000` 张硬币图片数据上微调得到的。
**Coin-CLIP** 旨在提高模型针对硬币图片的特征提取能力,从而实现更准确的以图搜图功能。该模型结合了视觉变换器(ViT)的强大功能和 CLIP 的多模态学习能力,并专门针对硬币图片进行了优化。
## Comparison: Coin-CLIP vs. CLIP / 效果对比
#### Example 1 (Left: Coin-CLIP; Right: CLIP)

#### Example 2 (Left: Coin-CLIP; Right: CLIP)

More examples can be found: [breezedeus/Coin-CLIP: Coin CLIP](https://github.com/breezedeus/Coin-CLIP) .
## Usage and Limitations / 使用和限制
- **Usage**: This model is primarily used for extracting representation vectors from coin images, enabling efficient and precise image-based searches in a coin image database.
- **Limitations**: As the model is trained specifically on coin images, it may not perform well on non-coin images.
- **用途**:此模型主要用于提取硬币图片的表示向量,以实现在硬币图像库中进行高效、精确的以图搜图。
- **限制**:由于模型是针对硬币图像进行训练的,因此在处理非硬币图像时可能效果不佳。
## Documents / 文档
- Base Model: [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)
## Model Use / 模型使用
### Transformers
```python
from PIL import Image
import requests
import torch.nn.functional as F
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("breezedeus/coin-clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("breezedeus/coin-clip-vit-base-patch32")
image_fp = "path/to/coin_image.jpg"
image = Image.open(image_fp).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
img_features = model.get_image_features(**inputs)
img_features = F.normalize(img_features, dim=1)
```
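Building on the snippet above, a minimal sketch of comparing two coin images by cosine similarity (file paths are placeholders):
```python
from PIL import Image
import torch.nn.functional as F
from transformers import CLIPProcessor, CLIPModel

model = CLIPModel.from_pretrained("breezedeus/coin-clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("breezedeus/coin-clip-vit-base-patch32")

def embed(image_fp: str):
    # Encode one coin image into an L2-normalized feature vector.
    image = Image.open(image_fp).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    return F.normalize(model.get_image_features(**inputs), dim=1)

# Cosine similarity between normalized embeddings (higher means more similar coins).
feats_a = embed("path/to/coin_a.jpg")
feats_b = embed("path/to/coin_b.jpg")
print((feats_a * feats_b).sum(dim=1).item())
```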
### Tool / 工具
To further simplify the use of the **Coin-CLIP** model, we provide a simple Python library [breezedeus/Coin-CLIP: Coin CLIP](https://github.com/breezedeus/Coin-CLIP) for quickly building a coin image retrieval engine.
为了进一步简化 **Coin-CLIP** 模型的使用,我们提供了一个简单的 Python 库 [breezedeus/Coin-CLIP: Coin CLIP](https://github.com/breezedeus/Coin-CLIP),以便快速构建硬币图像检索引擎。
#### Install
```bash
pip install coin_clip
```
#### Extract Feature Vectors
```python
from coin_clip import CoinClip
# Automatically download the model from Huggingface
model = CoinClip(model_name='breezedeus/coin-clip-vit-base-patch32')
images = ['examples/10_back.jpg', 'examples/16_back.jpg']
img_feats, success_ids = model.get_image_features(images)
print(img_feats.shape) # --> (2, 512)
```
More Tools can be found: [breezedeus/Coin-CLIP: Coin CLIP](https://github.com/breezedeus/Coin-CLIP) .
## Training Data / 训练数据
The model was trained on a specialized coin image dataset. This dataset includes images of various currencies' coins.
本模型使用的是专门的硬币图像数据集进行训练。这个数据集包含了多种货币的硬币图片。
## Training Process / 训练过程
The model was fine-tuned on the OpenAI CLIP (ViT-B/32) pretrained model using a coin image dataset. The training process involved Contrastive Learning fine-tuning techniques and parameter settings.
模型是在 OpenAI 的 CLIP (ViT-B/32) 预训练模型的基础上,使用硬币图像数据集进行微调。训练过程采用了对比学习的微调技巧和参数设置。
## Performance / 性能
This model demonstrates excellent performance in coin image retrieval tasks.
该模型在硬币图像检索任务上展现了优异的性能。
## Feedback / 反馈
> Where to send questions or comments about the model.
Feel free to contact the author [Breezedeus](https://www.breezedeus.com/join-group).
欢迎联系作者 [Breezedeus](https://www.breezedeus.com/join-group) 。
|
multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior
|
multimodalart
| 2023-12-08T07:22:11Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:55:10Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-3-steps-750-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior
|
multimodalart
| 2023-12-08T07:17:26Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:58:05Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-4-steps-500-rank64-no-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03
|
multimodalart
| 2023-12-08T07:16:58Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:58:23Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A photo of <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03
<Gallery />
## Model description
### These are multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('A photo of <s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/poliphoto-repeats-3-steps-500-rank64-no-prior-pivot-03/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
multimodalart/polipure-repeats-2-steps-500-rank64-with-prior
|
multimodalart
| 2023-12-08T07:16:46Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:44:19Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/polipure-repeats-2-steps-500-rank64-with-prior
<Gallery />
## Model description
### These are multimodalart/polipure-repeats-2-steps-500-rank64-with-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polipure-repeats-2-steps-500-rank64-with-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-2-steps-500-rank64-with-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/polipure-repeats-2-steps-500-rank64-with-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
multimodalart/polipure-repeats-4-steps-500-rank64-with-prior
|
multimodalart
| 2023-12-08T07:15:16Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2023-12-08T06:42:57Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: <s0><s1>
license: openrail++
---
# SDXL LoRA DreamBooth - multimodalart/polipure-repeats-4-steps-500-rank64-with-prior
<Gallery />
## Model description
### These are multimodalart/polipure-repeats-4-steps-500-rank64-with-prior LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
## Trigger words
To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('multimodalart/polipure-repeats-4-steps-500-rank64-with-prior', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='multimodalart/polipure-repeats-4-steps-500-rank64-with-prior', filename="embeddings.safetensors", repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)
image = pipeline('<s0><s1>').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- Download the LoRA *.safetensors [here](/multimodalart/polipure-repeats-4-steps-500-rank64-with-prior/blob/main/pytorch_lora_weights.safetensors). Rename it and place it in your LoRA folder.
- Download the text embeddings *.safetensors [here](/multimodalart/polipure-repeats-4-steps-500-rank64-with-prior/blob/main/embeddings.safetensors). Rename it and place it in your embeddings folder.
All [Files & versions](/multimodalart/polipure-repeats-4-steps-500-rank64-with-prior/tree/main).
## Details
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).
LoRA for the text encoder was enabled: False.
Pivotal tuning was enabled: True.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
|
Lew/ppo-SnowballTarget
|
Lew
| 2023-12-08T07:10:13Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-12-08T07:10:10Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Lew/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt
|
hahahafofo
| 2023-12-08T07:07:48Z | 49 | 12 |
transformers
|
[
"transformers",
"safetensors",
"qwen",
"text-generation",
"custom_code",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2023-12-08T07:01:59Z |
---
license: apache-2.0
---
模型使用在生成稳定扩散(Stable Diffusion)的提示语(prompt)。它主要通过中文生成相应的英文提示语,充分利用Qwen的能力,支持包括古诗词在内的多种形式进行提示语的生成。此模型基于35k 条数据进行特定于任务的微调(SFT)
模型的对应代码已发布于GitHub,您可以通过以下链接访问并使用SDXL-Turbo进行快速测试生成的提示语:
[https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git)
This model is designed to generate prompts for Stable Diffusion. It primarily translates Chinese into corresponding English prompts, leveraging the capabilities of Qwen, and supports a variety of formats including classical poetry for prompt generation. The model has been fine-tuned specifically for this task (SFT) using a dataset of 35k entries.
The corresponding code for the model is available on GitHub. You can access and use SDXL-Turbo for rapid test generation of prompts through the following link:
[https://github.com/zhongpei/Qwen-SDXL-Turbo.git](https://github.com/zhongpei/Qwen-SDXL-Turbo.git)
**参数表格(Parameters Table):**
| 参数 | 值 |
| -------------- | -------- |
| 数据量(Data) | 35k |
| 训练周期(Epochs) | 1 |
| 学习率(Learning Rate) | 1e-05 |
| 损失值(Loss) | 1.03 |
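The exact prompt format used during fine-tuning is not documented here, so the plain-text input below is only an assumption; see the linked GitHub repository for the intended usage. A minimal generation sketch with `transformers` (requires `tiktoken` for the Qwen tokenizer):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "hahahafofo/Qwen-1_8B-Stable-Diffusion-Prompt"

# Qwen checkpoints ship custom modeling code, so trust_remote_code is required.
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, device_map="auto", trust_remote_code=True
).eval()

# A Chinese description to be turned into an English Stable Diffusion prompt.
text = "一只可爱的猫咪在花园里玩耍"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```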
|
traromal/mistralengft
|
traromal
| 2023-12-08T06:50:48Z | 4 | 1 | null |
[
"safetensors",
"autotrain",
"text-generation",
"license:other",
"region:us"
] |
text-generation
| 2023-12-07T16:34:21Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
f1avj0/GAN-Diffusion
|
f1avj0
| 2023-12-08T06:47:38Z | 0 | 0 | null |
[
"license:afl-3.0",
"region:us"
] | null | 2023-12-08T06:41:42Z |
---
license: afl-3.0
---
Final project for HPML, Spring 2023, at NYU. The project is about the optimization of diffusion networks: as discussed in the report, the main idea revolves around possible ways to optimize the training of diffusion networks by making use of PyTorch profiling.
RESULTS:
- The convolutional backpropagation is the main bottleneck.
- Enabling AMP sped up the CPU runtime on 1 GPU and also slightly improved losses.
- 2 GPUs gave a slight speedup compared to 1 GPU.
- On 2 GPUs, AMP did not improve runtimes.
- With two GPUs the data is parallelized, so the model spends less time on backpropagation.

Stats:

| Configuration | CPU time | CUDA time | Total runtime |
|:--------------|---------:|----------:|--------------:|
| 1 GPU with AMP | 28.3 | 25.5 | 502 |
| 1 GPU without AMP | 42.4 | 41.09 | 428 |
| 2 GPUs with AMP | 39.7 | 25.4 | 572 |
| 2 GPUs without AMP | 38.2 | 25.3 | 559 |
|
Mattffc4/q-FrozenLake-v1-4x4-noSlippery
|
Mattffc4
| 2023-12-08T06:42:15Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T06:42:12Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="Mattffc4/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
saksuke/DeydooGPT_KB
|
saksuke
| 2023-12-08T06:39:55Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2023-12-07T09:59:22Z |
---
license: mit
---
https://neuml.github.io/txtai/embeddings/
TxtAI embedding store for my knowledge base
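A minimal loading sketch with `txtai`, assuming the repository contents are a saved embeddings index that can be downloaded and loaded locally (the query text is illustrative):
```python
from huggingface_hub import snapshot_download
from txtai.embeddings import Embeddings

# Download the repository files locally; whether the repo root is directly loadable
# as a txtai index is an assumption about how it was saved.
index_path = snapshot_download(repo_id="saksuke/DeydooGPT_KB")

embeddings = Embeddings()
embeddings.load(index_path)

# Semantic search over the knowledge base.
print(embeddings.search("example query", 3))
```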
|
noamsmi/ppo-LunarLander-v2
|
noamsmi
| 2023-12-08T06:38:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T06:36:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.92 +/- 24.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
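A minimal loading sketch, assuming the checkpoint in this repository is saved as `ppo-LunarLander-v2.zip` (the filename is an assumption):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="noamsmi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment (LunarLander needs the box2d extra).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```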
|
afrideva/Ferret-3B-GGUF
|
afrideva
| 2023-12-08T06:14:12Z | 31 | 2 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"dataset:euclaise/MiniCoT",
"dataset:euclaise/SciCoT",
"dataset:euclaise/symtune_mini",
"dataset:euclaise/mathoverflow-accepted",
"dataset:euirim/goodwiki",
"base_model:euclaise/Ferret-3B",
"base_model:quantized:euclaise/Ferret-3B",
"region:us",
"conversational"
] |
text-generation
| 2023-12-08T06:05:07Z |
---
base_model: euclaise/Ferret-3B
datasets:
- euclaise/MiniCoT
- euclaise/SciCoT
- euclaise/symtune_mini
- euclaise/mathoverflow-accepted
- euirim/goodwiki
inference: false
model_creator: euclaise
model_name: Ferret-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# euclaise/Ferret-3B-GGUF
Quantized GGUF model files for [Ferret-3B](https://huggingface.co/euclaise/Ferret-3B) from [euclaise](https://huggingface.co/euclaise)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ferret-3b.fp16.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.fp16.gguf) | fp16 | 5.59 GB |
| [ferret-3b.q2_k.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [ferret-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [ferret-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [ferret-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [ferret-3b.q6_k.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [ferret-3b.q8_0.gguf](https://huggingface.co/afrideva/Ferret-3B-GGUF/resolve/main/ferret-3b.q8_0.gguf) | q8_0 | 2.97 GB |
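A minimal loading sketch with `llama-cpp-python` (the choice of quant file is arbitrary, and the plain-text prompt is an assumption since the expected prompt format is not documented here):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one of the quantized files listed above.
model_path = hf_hub_download(repo_id="afrideva/Ferret-3B-GGUF", filename="ferret-3b.q4_k_m.gguf")

llm = Llama(model_path=model_path, n_ctx=2048)

# Plain-text prompting is an assumption; adjust to the prompt format Ferret-3B expects.
output = llm("Briefly explain photosynthesis.", max_tokens=128)
print(output["choices"][0]["text"])
```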
## Original Model Card:
|
GOT-HW/12345
|
GOT-HW
| 2023-12-08T06:14:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"chemistry",
"token-classification",
"zh",
"dataset:HuggingFaceH4/no_robots",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2023-08-16T13:40:58Z |
---
license: apache-2.0
datasets:
- HuggingFaceH4/no_robots
language:
- zh
metrics:
- bleurt
library_name: diffusers
pipeline_tag: token-classification
tags:
- chemistry
---
|
omersubasi/xlm-roberta-base-finetuned-panx-en
|
omersubasi
| 2023-12-08T06:02:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T05:51:53Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6837988826815643
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
- F1: 0.6838
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1357 | 1.0 | 50 | 0.5871 | 0.4590 |
| 0.5236 | 2.0 | 100 | 0.4412 | 0.6478 |
| 0.3765 | 3.0 | 150 | 0.3984 | 0.6838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
Rafaelrosendo1/whisper2
|
Rafaelrosendo1
| 2023-12-08T06:00:26Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-04T11:38:38Z |
---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper2
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2574
- Wer: 96.9001
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1143 | 1.27 | 1000 | 0.2184 | 56.9383 |
| 0.0524 | 2.54 | 2000 | 0.2134 | 54.9835 |
| 0.024 | 3.82 | 3000 | 0.2309 | 61.1703 |
| 0.0076 | 5.09 | 4000 | 0.2574 | 96.9001 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
omersubasi/xlm-roberta-base-finetuned-panx-it
|
omersubasi
| 2023-12-08T05:57:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T05:52:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8218390804597702
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2503
- F1: 0.8218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8253 | 1.0 | 70 | 0.3503 | 0.7160 |
| 0.2781 | 2.0 | 140 | 0.2643 | 0.8148 |
| 0.1871 | 3.0 | 210 | 0.2503 | 0.8218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
Gizachew/amharic_hate_speech-PEFT
|
Gizachew
| 2023-12-08T05:56:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Davlan/bert-base-multilingual-cased-finetuned-amharic",
"base_model:adapter:Davlan/bert-base-multilingual-cased-finetuned-amharic",
"region:us"
] | null | 2023-12-08T05:56:09Z |
---
library_name: peft
base_model: Davlan/bert-base-multilingual-cased-finetuned-amharic
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
omersubasi/xlm-roberta-base-finetuned-panx-fr
|
omersubasi
| 2023-12-08T05:54:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T05:51:38Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8115649689023365
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3184
- F1: 0.8116
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7671 | 1.0 | 96 | 0.3643 | 0.7537 |
| 0.325 | 2.0 | 192 | 0.3360 | 0.7977 |
| 0.2209 | 3.0 | 288 | 0.3184 | 0.8116 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
omersubasi/xlm-roberta-base-finetuned-panx-de-fr
|
omersubasi
| 2023-12-08T05:42:49Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-12-08T05:30:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1606
- F1: 0.8620
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2873 | 1.0 | 715 | 0.1802 | 0.8245 |
| 0.1446 | 2.0 | 1430 | 0.1601 | 0.8512 |
| 0.0925 | 3.0 | 2145 | 0.1606 | 0.8620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.1.0+cu118
- Datasets 1.16.1
- Tokenizers 0.15.0
|
theshresthshukla/Distort-Image
|
theshresthshukla
| 2023-12-08T05:33:18Z | 0 | 0 | null |
[
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T05:19:14Z |
---
license: apache-2.0
language:
- en
---
|
alexkarpekov/dlt_llama2_alpaca
|
alexkarpekov
| 2023-12-08T05:29:28Z | 2 | 0 |
peft
|
[
"peft",
"region:us"
] | null | 2023-12-08T05:29:26Z |
---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
pratikthakkar007/fz61-9i4h-j3s8-0
|
pratikthakkar007
| 2023-12-08T05:28:35Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-08T05:27:08Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # use the model's device instead of assuming CUDA
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
KelvinLLL/560m_LORA_TUNING_CAUSAL_LM_10epoch
|
KelvinLLL
| 2023-12-08T05:28:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigscience/bloomz-560m",
"base_model:adapter:bigscience/bloomz-560m",
"region:us"
] | null | 2023-11-14T19:57:40Z |
---
library_name: peft
base_model: bigscience/bloomz-560m
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0
|
leejunhyeok/MoMo-70B-LoRA-V1.2_1
|
leejunhyeok
| 2023-12-08T05:27:02Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"license:llama2",
"endpoints_compatible",
"region:us"
] | null | 2023-12-08T04:43:54Z |
---
license: llama2
language:
- en
library_name: transformers
---
## Dataset Details
### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
- No other datasets were used apart from those listed above
- No benchmark test set or its corresponding training set was used
## Prompt Template
### Alpaca-style
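The template itself is not spelled out in this card; a typical Alpaca-style prompt (an assumption about the common convention, not the authors' exact wording) looks like:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```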
|
alinerodrigues/wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
|
alinerodrigues
| 2023-12-08T05:25:14Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2023-12-08T00:12:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-mecita-coraa-portuguese-2-all-07
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
- Wer: 0.0921
- Cer: 0.0289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 28.6541 | 1.0 | 86 | 3.2004 | 1.0 | 1.0 |
| 7.0678 | 2.0 | 172 | 2.9330 | 1.0 | 1.0 |
| 3.0229 | 3.0 | 258 | 2.9009 | 1.0 | 1.0 |
| 2.931 | 4.0 | 344 | 2.9000 | 1.0 | 1.0 |
| 2.9177 | 5.0 | 430 | 2.8428 | 1.0 | 1.0 |
| 2.6907 | 6.0 | 516 | 1.2090 | 0.9756 | 0.2837 |
| 1.3248 | 7.0 | 602 | 0.5202 | 0.2478 | 0.0702 |
| 1.3248 | 8.0 | 688 | 0.3687 | 0.1941 | 0.0567 |
| 0.7818 | 9.0 | 774 | 0.3006 | 0.1641 | 0.0482 |
| 0.5879 | 10.0 | 860 | 0.2740 | 0.1586 | 0.0476 |
| 0.5164 | 11.0 | 946 | 0.2360 | 0.1508 | 0.0446 |
| 0.4796 | 12.0 | 1032 | 0.2204 | 0.1385 | 0.0423 |
| 0.4054 | 13.0 | 1118 | 0.2071 | 0.1337 | 0.0399 |
| 0.3639 | 14.0 | 1204 | 0.2009 | 0.1148 | 0.0365 |
| 0.3639 | 15.0 | 1290 | 0.1918 | 0.1201 | 0.0370 |
| 0.3318 | 16.0 | 1376 | 0.1841 | 0.1073 | 0.0342 |
| 0.3192 | 17.0 | 1462 | 0.1804 | 0.1064 | 0.0344 |
| 0.2857 | 18.0 | 1548 | 0.1783 | 0.1030 | 0.0336 |
| 0.2867 | 19.0 | 1634 | 0.1730 | 0.1037 | 0.0331 |
| 0.2703 | 20.0 | 1720 | 0.1643 | 0.0993 | 0.0323 |
| 0.2625 | 21.0 | 1806 | 0.1624 | 0.1030 | 0.0320 |
| 0.2625 | 22.0 | 1892 | 0.1676 | 0.1003 | 0.0319 |
| 0.255 | 23.0 | 1978 | 0.1610 | 0.1003 | 0.0317 |
| 0.241 | 24.0 | 2064 | 0.1628 | 0.0981 | 0.0314 |
| 0.2422 | 25.0 | 2150 | 0.1607 | 0.0967 | 0.0312 |
| 0.2293 | 26.0 | 2236 | 0.1582 | 0.0957 | 0.0311 |
| 0.2222 | 27.0 | 2322 | 0.1550 | 0.0948 | 0.0304 |
| 0.227 | 28.0 | 2408 | 0.1564 | 0.0960 | 0.0310 |
| 0.227 | 29.0 | 2494 | 0.1601 | 0.0984 | 0.0310 |
| 0.2193 | 30.0 | 2580 | 0.1506 | 0.0911 | 0.0293 |
| 0.197 | 31.0 | 2666 | 0.1519 | 0.0948 | 0.0303 |
| 0.2087 | 32.0 | 2752 | 0.1510 | 0.0933 | 0.0304 |
| 0.1868 | 33.0 | 2838 | 0.1515 | 0.0984 | 0.0313 |
| 0.2023 | 34.0 | 2924 | 0.1538 | 0.0906 | 0.0296 |
| 0.206 | 35.0 | 3010 | 0.1473 | 0.0945 | 0.0299 |
| 0.206 | 36.0 | 3096 | 0.1473 | 0.0892 | 0.0292 |
| 0.2013 | 37.0 | 3182 | 0.1418 | 0.0928 | 0.0293 |
| 0.1733 | 38.0 | 3268 | 0.1459 | 0.0882 | 0.0282 |
| 0.1813 | 39.0 | 3354 | 0.1474 | 0.0904 | 0.0288 |
| 0.1903 | 40.0 | 3440 | 0.1423 | 0.0916 | 0.0294 |
| 0.1599 | 41.0 | 3526 | 0.1444 | 0.0914 | 0.0292 |
| 0.1576 | 42.0 | 3612 | 0.1439 | 0.0904 | 0.0287 |
| 0.1576 | 43.0 | 3698 | 0.1495 | 0.0882 | 0.0287 |
| 0.1603 | 44.0 | 3784 | 0.1407 | 0.0909 | 0.0294 |
| 0.1726 | 45.0 | 3870 | 0.1413 | 0.0897 | 0.0289 |
| 0.1582 | 46.0 | 3956 | 0.1416 | 0.0892 | 0.0284 |
| 0.1622 | 47.0 | 4042 | 0.1416 | 0.0894 | 0.0280 |
| 0.1581 | 48.0 | 4128 | 0.1404 | 0.0887 | 0.0284 |
| 0.1606 | 49.0 | 4214 | 0.1396 | 0.0894 | 0.0284 |
| 0.1422 | 50.0 | 4300 | 0.1458 | 0.0892 | 0.0281 |
| 0.1422 | 51.0 | 4386 | 0.1411 | 0.0919 | 0.0293 |
| 0.1611 | 52.0 | 4472 | 0.1417 | 0.0919 | 0.0293 |
| 0.1354 | 53.0 | 4558 | 0.1462 | 0.0909 | 0.0291 |
| 0.1433 | 54.0 | 4644 | 0.1382 | 0.0921 | 0.0289 |
| 0.145 | 55.0 | 4730 | 0.1424 | 0.0948 | 0.0293 |
| 0.1418 | 56.0 | 4816 | 0.1441 | 0.0921 | 0.0295 |
| 0.1544 | 57.0 | 4902 | 0.1435 | 0.0950 | 0.0293 |
| 0.1544 | 58.0 | 4988 | 0.1437 | 0.0940 | 0.0292 |
| 0.1342 | 59.0 | 5074 | 0.1468 | 0.0921 | 0.0289 |
| 0.1398 | 60.0 | 5160 | 0.1491 | 0.0923 | 0.0293 |
| 0.1325 | 61.0 | 5246 | 0.1451 | 0.0919 | 0.0293 |
| 0.1496 | 62.0 | 5332 | 0.1466 | 0.0926 | 0.0290 |
| 0.1283 | 63.0 | 5418 | 0.1447 | 0.0892 | 0.0286 |
| 0.1275 | 64.0 | 5504 | 0.1431 | 0.0868 | 0.0276 |
| 0.1275 | 65.0 | 5590 | 0.1452 | 0.0856 | 0.0271 |
| 0.126 | 66.0 | 5676 | 0.1463 | 0.0919 | 0.0284 |
| 0.1404 | 67.0 | 5762 | 0.1470 | 0.0911 | 0.0283 |
| 0.1446 | 68.0 | 5848 | 0.1444 | 0.0914 | 0.0282 |
| 0.1357 | 69.0 | 5934 | 0.1443 | 0.0906 | 0.0275 |
| 0.1269 | 70.0 | 6020 | 0.1440 | 0.0894 | 0.0276 |
| 0.1333 | 71.0 | 6106 | 0.1457 | 0.0902 | 0.0275 |
| 0.1333 | 72.0 | 6192 | 0.1454 | 0.0880 | 0.0278 |
| 0.1355 | 73.0 | 6278 | 0.1407 | 0.0890 | 0.0275 |
| 0.1108 | 74.0 | 6364 | 0.1428 | 0.0928 | 0.0287 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.13.3
|
agni1024/yolos-pocket
|
agni1024
| 2023-12-08T05:22:50Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"yolos",
"object-detection",
"YOLOS",
"Object detection",
"en",
"dataset:detection-datasets/fashionpedia",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-12-06T09:28:48Z |
---
datasets:
- detection-datasets/fashionpedia
language:
- en
pipeline_tag: object-detection
tags:
- YOLOS
- Object detection
---
This is a fine-tuned object detection model that detects only pockets.
The model supports the following categories:
CATS = ['pocket']
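A minimal usage sketch with the 🤗 Transformers object-detection pipeline (the image path below is a placeholder, not part of the original card):
```python
from transformers import pipeline
from PIL import Image

# Load the fine-tuned YOLOS checkpoint through the standard object-detection pipeline
detector = pipeline("object-detection", model="agni1024/yolos-pocket")

image = Image.open("shirt.jpg")  # placeholder path; any clothing image works
for detection in detector(image):
    print(detection["label"], round(detection["score"], 3), detection["box"])
```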
|
afrideva/Echo-3B-GGUF
|
afrideva
| 2023-12-08T05:16:49Z | 7 | 3 | null |
[
"gguf",
"ggml",
"quantized",
"q2_k",
"q3_k_m",
"q4_k_m",
"q5_k_m",
"q6_k",
"q8_0",
"text-generation",
"dataset:pankajmathur/lima_unchained_v1",
"dataset:CheshireAI/guanaco-unchained",
"dataset:totally-not-an-llm/sharegpt-hyperfiltered-3k",
"dataset:totally-not-an-llm/EverythingLM-data-V3",
"dataset:LDJnr/Verified-Camel",
"dataset:CollectiveCognition/chats-data-2023-10-16",
"dataset:Norquinal/claude_multiround_chat_30k",
"dataset:euclaise/WritingPromptsX",
"dataset:euirim/goodwiki",
"dataset:euclaise/MiniCoT",
"dataset:euclaise/SciCoT",
"dataset:euclaise/symtune_mini",
"dataset:euclaise/mathoverflow-accepted",
"dataset:lemonilia/LimaRP",
"base_model:euclaise/Echo-3B",
"base_model:quantized:euclaise/Echo-3B",
"region:us",
"conversational"
] |
text-generation
| 2023-12-08T05:07:04Z |
---
base_model: euclaise/Echo-3B
datasets:
- pankajmathur/lima_unchained_v1
- CheshireAI/guanaco-unchained
- totally-not-an-llm/sharegpt-hyperfiltered-3k
- totally-not-an-llm/EverythingLM-data-V3
- LDJnr/Verified-Camel
- CollectiveCognition/chats-data-2023-10-16
- Norquinal/claude_multiround_chat_30k
- euclaise/WritingPromptsX
- euirim/goodwiki
- euclaise/MiniCoT
- euclaise/SciCoT
- euclaise/symtune_mini
- euclaise/mathoverflow-accepted
- lemonilia/LimaRP
inference: false
model_creator: euclaise
model_name: Echo-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# euclaise/Echo-3B-GGUF
Quantized GGUF model files for [Echo-3B](https://huggingface.co/euclaise/Echo-3B) from [euclaise](https://huggingface.co/euclaise)
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [echo-3b.fp16.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.fp16.gguf) | fp16 | 5.59 GB |
| [echo-3b.q2_k.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [echo-3b.q3_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [echo-3b.q4_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [echo-3b.q5_k_m.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [echo-3b.q6_k.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [echo-3b.q8_0.gguf](https://huggingface.co/afrideva/Echo-3B-GGUF/resolve/main/echo-3b.q8_0.gguf) | q8_0 | 2.97 GB |
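A minimal sketch for running one of these files locally with `llama-cpp-python` (the package choice, context size, and prompt are assumptions, not part of the original card):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# q4_k_m is a common size/quality trade-off; any file from the table above works
model_file = hf_hub_download(repo_id="afrideva/Echo-3B-GGUF", filename="echo-3b.q4_k_m.gguf")
llm = Llama(model_path=model_file, n_ctx=2048)

result = llm("Write two sentences about echoes.", max_tokens=128)
print(result["choices"][0]["text"])
```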
## Original Model Card:
|
amehfooz/lora
|
amehfooz
| 2023-12-08T05:00:55Z | 1 | 0 |
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:amehfooz/RuwaifaRufy",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-08T04:16:16Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: ruwaifarufy
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: false
datasets:
- amehfooz/RuwaifaRufy
---
# LoRA DreamBooth - amehfooz/lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
ruwaifarufy
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition, make sure to install transformers, safetensors, and accelerate, as well as the invisible-watermark package:
```
pip install invisible_watermark transformers accelerate safetensors
```
To load the base model together with these LoRA weights, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'amehfooz/lora',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic ruwaifarufy jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
|
msyed123/ppo-LunarLander-v2
|
msyed123
| 2023-12-08T04:54:43Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T04:54:25Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 180.88 +/- 107.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
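A minimal sketch of loading and evaluating the agent (the checkpoint filename inside the repo is an assumption; check the Files tab for the actual name):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; adjust to the checkpoint actually stored in the repo
checkpoint = load_from_hub(repo_id="msyed123/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```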
|
brettbbb/mc_cot_128
|
brettbbb
| 2023-12-08T04:40:41Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T04:05:07Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_128
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
0xkhairi/out
|
0xkhairi
| 2023-12-08T04:24:19Z | 3 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T04:00:11Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 150
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1.dev0
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
banghua/openhermes-dpo-ckpt20k
|
banghua
| 2023-12-08T04:17:03Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"region:us"
] | null | 2023-12-08T04:16:45Z |
---
library_name: peft
base_model: teknium/OpenHermes-2.5-Mistral-7B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.1
|
J-YOON/lora-monet-sd1.5
|
J-YOON
| 2023-12-08T04:05:07Z | 16 | 0 |
diffusers
|
[
"diffusers",
"pytorch",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:mit",
"region:us"
] |
text-to-image
| 2023-12-07T07:43:36Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
output:
url: images/001SFEC_TOWER_BRIDGE-200705.jpg
- text: <lora:monet_last:1>, msl monet, dog.
output:
url: images/00032-205941521.png
- text: <lora:monet_last:1>, msl monet, bridge.
output:
url: images/00033-1287362183.png
- text: '-'
output:
url: images/OIP.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: msl monet
license: mit
---
# monet-painting
<Gallery />
## Model description
Monet painting style.
Trigger word: `msl monet`
## Trigger words
You should use `msl monet` to trigger the image generation.
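A minimal diffusers sketch (device, prompt, and output filename are illustrative assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the SD 1.5 base pipeline and apply the LoRA weights from this repo
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("J-YOON/lora-monet-sd1.5")

# The trigger word "msl monet" goes in the prompt
image = pipe("msl monet, a garden with water lilies at sunset").images[0]
image.save("monet_garden.png")
```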
## Download model
Weights for this model are available in Safetensors format.
[Download](/J-YOON/lora-monet-sd1.5/tree/main) them in the Files & versions tab.
|
brettbbb/mc_cot_64
|
brettbbb
| 2023-12-08T04:02:28Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T03:44:36Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_64
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_64
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
udemex10/Vojo_AI_LLMforCAD_v1.0
|
udemex10
| 2023-12-08T03:58:57Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"custom_code",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:adapter:tiiuae/falcon-7b-instruct",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2023-12-07T23:50:47Z |
---
library_name: peft
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
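For reference, the configuration listed above corresponds roughly to the following `BitsAndBytesConfig` (a sketch, not the exact training code):
```python
import torch
from transformers import BitsAndBytesConfig

# 8-bit quantization settings matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)
```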
### Framework versions
- PEFT 0.7.0
|
soksovann/vongchamnan
|
soksovann
| 2023-12-08T03:58:12Z | 12 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2023-12-08T03:55:09Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
High-fashion photography of ohwx woman in an abandoned industrial warehouse,
with dramatic lighting and edgy outfits.
output:
url: images/photo_2023-12-08_10-53-25.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Chamnan
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/soksovann/vongchamnan/tree/main) them in the Files & versions tab.
|
mjaydenkim/test_trainer
|
mjaydenkim
| 2023-12-08T03:53:26Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:tomh/toxigen_hatebert",
"base_model:finetune:tomh/toxigen_hatebert",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-08T03:27:52Z |
---
base_model: tomh/toxigen_hatebert
tags:
- generated_from_trainer
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [tomh/toxigen_hatebert](https://huggingface.co/tomh/toxigen_hatebert) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.0.post101
- Datasets 2.14.6
- Tokenizers 0.13.3
|
stockmark/bart-base-japanese-news
|
stockmark
| 2023-12-08T03:39:50Z | 81 | 10 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bart",
"text2text-generation",
"ja",
"japanese",
"lm",
"nlp",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-01-20T04:23:07Z |
---
language: ja
tags:
- ja
- japanese
- bart
- lm
- nlp
license: mit
---
# bart-base-japanese-news (base-sized model)
This repository provides a Japanese BART model. The model was trained by [Stockmark Inc.](https://stockmark.co.jp)
An introductory article on the model can be found at the following URL.
[https://tech.stockmark.co.jp/blog/bart-japanese-base-news/](https://tech.stockmark.co.jp/blog/bart-japanese-base-news/)
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.
# How to use the model
*NOTE:* Since we are using a custom tokenizer, please use `trust_remote_code=True` to initialize the tokenizer.
## Simple use
```python
from transformers import AutoTokenizer, BartModel
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartModel.from_pretrained(model_name)
inputs = tokenizer("今日は良い天気です。", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
## Sentence Permutation
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
# correct order text is "明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。"
text = "電車は止まる可能性があります。ですから、自宅から働きます。明日は大雨です。"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 明日は大雨です。電車は止まる可能性があります。ですから、自宅から働きます。
```
## Mask filling
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "今日の天気は<mask>のため、傘が必要でしょう。"
inputs = tokenizer([text], max_length=128, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, max_length=128)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 今日の天気は、雨のため、傘が必要でしょう。
```
## Text generation
*NOTE:* You can use the raw model for text generation. However, the model is mostly meant to be fine-tuned on a supervised dataset.
```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration
model_name = "stockmark/bart-base-japanese-news"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = BartForConditionalGeneration.from_pretrained(model_name)
if torch.cuda.is_available():
model = model.to("cuda")
text = "自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、人工知能と言語学の一分野である。「計算言語学」(computational linguistics)との類似もあるが、自然言語処理は工学的な視点からの言語処理をさすのに対して、計算言語学は言語学的視点を重視する手法をさす事が多い。"
inputs = tokenizer([text], max_length=512, return_tensors="pt", truncation=True)
text_ids = model.generate(inputs["input_ids"].to(model.device), num_beams=3, min_length=0, max_length=40)
output = tokenizer.batch_decode(text_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
# sample output: 自然言語処理(しぜんげんごしょり、略称:NLP)は、人間が日常的に使っている自然言語をコンピュータに処理させる一連の技術であり、言語学の一分野である。
```
# Training
The model was trained on Japanese News Articles.
# Tokenization
The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer. The vocabulary was first trained on a selected subset from the training data using the official sentencepiece training script.
# Licenses
The pretrained models are distributed under the terms of the [MIT License](https://opensource.org/licenses/mit-license.php).
*NOTE:* Only tokenization_bart_japanese_news.py is [Apache License, Version 2.0](http://www.apache.org/licenses/LICENSE-2.0). Please see tokenization_bart_japanese_news.py for license details.
# Contact
If you have any questions, please contact us using [our contact form](https://stockmark.co.jp/contact).
# Acknowledgement
This comparison study was supported with Cloud TPUs from Google’s TensorFlow Research Cloud (TFRC).
|
brettbbb/mc_cot_32
|
brettbbb
| 2023-12-08T03:38:32Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T03:29:33Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_32
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
0xkhairi/wahassen
|
0xkhairi
| 2023-12-08T03:32:11Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"region:us"
] | null | 2023-12-08T01:08:24Z |
---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
model-index:
- name: out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# out
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1.dev0
- Transformers 4.36.0.dev0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
krishnadasar-sudheer-kumar/Q-Taxi-V3
|
krishnadasar-sudheer-kumar
| 2023-12-08T03:29:25Z | 0 | 1 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T03:29:24Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Q-Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.82
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="krishnadasar-sudheer-kumar/Q-Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
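Following on from the snippet above, a short sketch of acting greedily with the loaded Q-table (assumes the pickled dict stores it under a `qtable` key, as in the Deep RL Course, and a gymnasium-style step API):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```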
|
brettbbb/mc_cot_16
|
brettbbb
| 2023-12-08T03:22:39Z | 0 | 0 | null |
[
"safetensors",
"generated_from_trainer",
"base_model:lmsys/vicuna-7b-v1.5",
"base_model:finetune:lmsys/vicuna-7b-v1.5",
"license:llama2",
"region:us"
] | null | 2023-12-08T03:07:05Z |
---
license: llama2
base_model: lmsys/vicuna-7b-v1.5
tags:
- generated_from_trainer
model-index:
- name: mc_cot_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mc_cot_16
This model is a fine-tuned version of [lmsys/vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0303
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1538 | 1.0 | 4 | 2.1552 |
| 1.7905 | 2.0 | 8 | 1.9012 |
| 1.4229 | 3.0 | 12 | 1.7686 |
| 1.3834 | 4.0 | 16 | 1.7070 |
| 1.4421 | 5.0 | 20 | 1.7308 |
| 0.9003 | 6.0 | 24 | 1.7646 |
| 0.7013 | 7.0 | 28 | 1.9070 |
| 0.6291 | 8.0 | 32 | 2.0078 |
| 0.3314 | 9.0 | 36 | 2.2682 |
| 0.1554 | 10.0 | 40 | 2.3624 |
| 0.0814 | 11.0 | 44 | 2.6523 |
| 0.0499 | 12.0 | 48 | 2.7565 |
| 0.0216 | 13.0 | 52 | 2.8505 |
| 0.0197 | 14.0 | 56 | 2.9170 |
| 0.0174 | 15.0 | 60 | 2.9433 |
| 0.0174 | 16.0 | 64 | 2.9683 |
| 0.0145 | 17.0 | 68 | 2.9966 |
| 0.013 | 18.0 | 72 | 3.0193 |
| 0.0151 | 19.0 | 76 | 3.0277 |
| 0.0145 | 20.0 | 80 | 3.0303 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.13.1
- Tokenizers 0.14.1
|
oostapeno/rsgd3_full_1B_repl_finegrained_poly_router_dir_none
|
oostapeno
| 2023-12-08T03:14:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-12-05T05:56:03Z |
Number of experts present in the library: 19
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| wiki_hop_original_choose_best_object_interrogative_2_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiki_hop_original_choose_best_object_interrogative_2 | lora |
| adversarial_qa_dbidaf_answer_the_following_q_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_answer_the_following_q | lora |
| social_i_qa_Check_if_a_random_answer_is_valid_or_not_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/social_i_qa_Check_if_a_random_answer_is_valid_or_not | lora |
| ai2_arc_ARC_Challenge_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/ai2_arc_ARC_Challenge_1_0_0 | lora |
| dbpedia_14_given_a_choice_of_categories__v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_choice_of_categories_ | lora |
| wiqa_effect_with_string_answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_effect_with_string_answer | lora |
| quoref_Find_Answer_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quoref_Find_Answer | lora |
| dream_baseline_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dream_baseline | lora |
| wiqa_what_might_be_the_first_step_of_the_process_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/wiqa_what_might_be_the_first_step_of_the_process | lora |
| duorc_SelfRC_answer_question_v3 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_SelfRC_answer_question | lora |
| squad_v1_1_3_0_0_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/squad_v1_1_3_0_0 | lora |
| quartz_read_passage_below_choose_v1 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/quartz_read_passage_below_choose | lora |
| dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/dbpedia_14_given_a_list_of_category_what_does_the_title_belong_to | lora |
| web_questions_whats_the_answer_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/web_questions_whats_the_answer | lora |
| duorc_ParaphraseRC_answer_question_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_answer_question | lora |
| duorc_ParaphraseRC_title_generation_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/duorc_ParaphraseRC_title_generation | lora |
| adversarial_qa_dbidaf_generate_question_v4 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/adversarial_qa_dbidaf_generate_question | lora |
| yelp_polarity_reviews_0_2_0_v5 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/yelp_polarity_reviews_0_2_0 | lora |
| cos_e_v2 | EleutherAI/gpt-neo-1.3B | sordonia/flan-10k-flat/cos_e_v1_11_question_description_option_text | lora |
Last updated on: 2023-12-08 03:14:42+00:00
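A heavily hedged sketch of loading a single expert from this library as a LoRA adapter on the listed base model. It assumes experts are stored as standard PEFT adapters in per-expert subfolders of this repo, which may not match the actual expert-library layout; the subfolder name is hypothetical and should be checked against the repo.
```python
# Assumption-laden sketch: one expert loaded as a plain PEFT LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "EleutherAI/gpt-neo-1.3B"
library_id = "oostapeno/rsgd3_full_1B_repl_finegrained_poly_router_dir_none"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# Hypothetical expert subfolder, taken from the expert names in the table above.
model = PeftModel.from_pretrained(
    base_model,
    library_id,
    subfolder="squad_v1_1_3_0_0_v1",  # assumption: verify the actual repo layout
)
```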
|
prajapatisarvesh/Reinforce-1
|
prajapatisarvesh
| 2023-12-08T03:13:44Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-12-08T03:13:35Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
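A minimal, generic REINFORCE sketch for CartPole-v1, shown only to illustrate the method; it is not necessarily the exact implementation behind this checkpoint and uses `gymnasium` and `torch` as assumed dependencies:
```python
import gymnasium as gym
import torch
import torch.nn as nn
from torch.distributions import Categorical

env = gym.make("CartPole-v1")

# Small policy network mapping observations to action probabilities.
policy = nn.Sequential(
    nn.Linear(env.observation_space.shape[0], 16),
    nn.ReLU(),
    nn.Linear(16, env.action_space.n),
    nn.Softmax(dim=-1),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = Categorical(probs)
        action = dist.sample()
        obs, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)

    # Discounted returns, computed backwards, then normalized.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)

    # REINFORCE loss: negative log-probability weighted by return.
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```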
|
Delcos/airoboros-mistral2.2-Starling-11b
|
Delcos
| 2023-12-08T02:55:53Z | 17 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"starling",
"llama-2",
"en",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-12-07T06:07:05Z |
---
license: cc-by-nc-nd-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- starling
- mistral
- llama-2
---
<h1 style="font-size: 36px;">PENDING</h1>
## Description:
**PENDING**
**Uncensored**
## Main Goals:
## Prompt Template:
**For optimal interaction, use this template:**
```
### Instruction:
### Response:
```
# Recommended Settings:
**Defaults:**
```
temperature: 0.77
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
```
**Better context but a little more repetitive in some cases:**
```
temperature: 0.8
top_p: 0.85
top_k: 20
repetition_penalty: 1.2
guidance_scale: 1.25
```
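A hedged sketch of applying the recommended default settings with `transformers`; the loading setup, prompt text, and token budget are assumptions, not part of the card:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delcos/airoboros-mistral2.2-Starling-11b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # assumes accelerate

# Prompt follows the Instruction/Response template above; the content is illustrative.
prompt = "### Instruction:\nSummarize the plot of Hamlet in two sentences.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.77,
    top_p=0.85,
    top_k=20,
    repetition_penalty=1.2,
    # For the second preset, additionally pass guidance_scale=1.25.
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```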
# Benchmarks:
PENDING
# Training Data:
PENDING
|
seetvn/my_awesome_model
|
seetvn
| 2023-12-08T02:45:56Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-12-05T21:59:18Z |
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2751
- Accuracy: 0.8912
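A minimal usage sketch with the `transformers` pipeline; the card does not document the label set, so the example input and the `LABEL_0`/`LABEL_1` outputs are assumptions:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="seetvn/my_awesome_model")
print(classifier("This movie was surprisingly good."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]  (label names are assumptions)
```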
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.3110 | 0.8788 |
| No log | 2.0 | 300 | 0.2751 | 0.8912 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
mitchyAI/hyewonmchy
|
mitchyAI
| 2023-12-08T02:30:36Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-12-08T02:29:21Z |
---
license: creativeml-openrail-m
---
|
vkorotchenko/llama-2-7b-fine-tuned-for-cdt-extraction-2-short-prompt
|
vkorotchenko
| 2023-12-08T02:14:02Z | 1 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | 2023-12-08T02:13:55Z |
---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
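In the absence of author-provided code, a hedged sketch of loading this LoRA adapter on top of the listed base model with PEFT; the prompt text and generation settings are assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "vkorotchenko/llama-2-7b-fine-tuned-for-cdt-extraction-2-short-prompt"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")  # assumes accelerate
model = PeftModel.from_pretrained(base_model, adapter_id)

# Hypothetical prompt; the intended prompt format is not documented in this card.
inputs = tokenizer("Extract the CDT codes from: ...", return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```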
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1.dev0
|