modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---|
wdika/MTL_IDSLRUNet_SKMTEA_poisson2d_4x | wdika | 2024-03-06T10:55:06Z | 0 | 0 | atommic | ["atommic", "multitask-image-reconstruction-image-segmentation", "IDSLRUNet", "ATOMMIC", "pytorch", "en", "dataset:SKMTEA", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:43:33Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- IDSLRUNet
- ATOMMIC
- pytorch
model-index:
- name: MTL_IDSLRUNet_SKMTEA_poisson2d_4x
results: []
---
## Model Overview
Image domain Deep Structured Low-Rank network using a UNet (IDSLRUNet) for 4x accelerated MRI reconstruction and segmentation with MultiTask Learning on the SKM-TEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_IDSLRUNet_SKMTEA_poisson2d_4x/blob/main/MTL_IDSLRUNet_SKMTEA_poisson2d_4x.atommic
mode: test
```
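The `checkpoint` field above points at the `.atommic` file inside this repository. If you prefer a local path, a minimal sketch of fetching it with `huggingface_hub` (the repo id and filename are taken from the URL above; passing the resulting path via the same `checkpoint:` key is an assumption):

```python
# Sketch only: download the .atommic checkpoint locally instead of using the URL.
# Assumes huggingface_hub is installed; repo id and filename come from the config above.
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="wdika/MTL_IDSLRUNet_SKMTEA_poisson2d_4x",
    filename="MTL_IDSLRUNet_SKMTEA_poisson2d_4x.atommic",
)
print(ckpt_path)  # could be used as the local `checkpoint:` value in the YAML
```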
### Usage
You need to download the SKMTEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: IDSLRUNET
use_reconstruction_module: true
input_channels: 64 # coils * 2
reconstruction_module_output_channels: 64 # coils * 2
segmentation_module_output_channels: 4
channels: 64
num_pools: 2
padding_size: 11
drop_prob: 0.0
normalize: false
padding: true
norm_groups: 2
num_iters: 5
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
reconstruction_loss:
l1: 1.0
kspace_reconstruction_loss: false
total_reconstruction_loss_weight: 0.5
total_segmentation_loss_weight: 0.5
```
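The `dice_loss_*` keys above closely mirror the arguments of MONAI's `DiceLoss`. A minimal sketch of the equivalent stand-alone loss, assuming MONAI is installed; this illustrates the settings only and is not ATOMMIC's internal wiring:

```python
# Illustrative mapping of the dice_loss_* settings above onto monai.losses.DiceLoss.
# Assumes MONAI; ATOMMIC may wrap this differently internally.
import torch
from monai.losses import DiceLoss

dice_loss = DiceLoss(
    include_background=True,  # dice_loss_include_background
    to_onehot_y=False,        # dice_loss_to_onehot_y
    sigmoid=False,            # dice_loss_sigmoid (activation applied separately below)
    softmax=False,            # dice_loss_softmax
    squared_pred=False,       # dice_loss_squared_pred
    jaccard=False,            # dice_loss_jaccard
    reduction="mean",         # card uses mean_batch; "mean" is the closest DiceLoss option
    smooth_nr=1e-5,           # dice_loss_smooth_nr
    smooth_dr=1e-5,           # dice_loss_smooth_dr
    batch=True,               # dice_loss_batch
)

pred = torch.sigmoid(torch.randn(2, 4, 64, 64))          # segmentation_activation: sigmoid
target = torch.randint(0, 2, (2, 4, 64, 64)).float()     # 4 segmentation classes
print(dice_loss(pred, target))
```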
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.
Evaluation can be performed using the reconstruction [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) scripts for the reconstruction and the segmentation tasks, with --evaluation_type per_slice.
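The reconstruction numbers reported below are standard per-slice quantities; a minimal sketch of computing them for one slice, assuming NumPy and scikit-image (the linked evaluation scripts are the authoritative implementation, and the segmentation metrics come from the segmentation script):

```python
# Sketch of the per-slice reconstruction metrics (MSE, NMSE, PSNR, SSIM) reported below.
# Assumes numpy and scikit-image; not the exact evaluation script.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def per_slice_metrics(target: np.ndarray, pred: np.ndarray) -> dict:
    """target, pred: 2D magnitude images on a common scale."""
    mse = float(np.mean((target - pred) ** 2))
    nmse = float(np.linalg.norm(target - pred) ** 2 / np.linalg.norm(target) ** 2)
    data_range = float(target.max() - target.min())
    psnr = peak_signal_noise_ratio(target, pred, data_range=data_range)
    ssim = structural_similarity(target, pred, data_range=data_range)
    return {"MSE": mse, "NMSE": nmse, "PSNR": psnr, "SSIM": ssim}
```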
Results
-------
Evaluation against SENSE targets
--------------------------------
4x: MSE = 0.001139 +/- 0.002133 NMSE = 0.02356 +/- 0.05514 PSNR = 30.53 +/- 5.593 SSIM = 0.8423 +/- 0.1057 DICE = 0.8943 +/- 0.127 F1 = 0.2558 +/- 0.2213 HD95 = 4.927 +/- 2.812 IOU = 0.2985 +/- 0.3086
## Limitations
This model was trained on the axial plane of the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with MultiTask Learning (MTL).
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
wdika/MTL_SegNet_SKMTEA_poisson2d_4x | wdika | 2024-03-06T10:54:52Z | 0 | 0 | atommic | ["atommic", "multitask-image-reconstruction-image-segmentation", "SegNet", "ATOMMIC", "pytorch", "en", "dataset:SKMTEA", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:44:04Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- multitask-image-reconstruction-image-segmentation
- SegNet
- ATOMMIC
- pytorch
model-index:
- name: MTL_SegNet_SKMTEA_poisson2d_4x
results: []
---
## Model Overview
Segmentation Network MRI (SegNet) for 4x accelerated MRI reconstruction and segmentation with MultiTask Learning on the SKM-TEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/MTL_SegNet_SKMTEA_poisson2d_4x/blob/main/MTL_SegNet_SKMTEA_poisson2d_4x.atommic
mode: test
```
### Usage
You need to download the SKMTEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/MTL/rs/SKMTEA/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGNET
use_reconstruction_module: true
input_channels: 64 # coils * 2
reconstruction_module_output_channels: 64 # coils * 2
segmentation_module_output_channels: 4
channels: 64
num_pools: 2
padding_size: 11
drop_prob: 0.0
normalize: true
padding: true
norm_groups: 2
num_cascades: 5
segmentation_final_layer_conv_dim: 2
segmentation_final_layer_kernel_size: 3
segmentation_final_layer_dilation: 1
segmentation_final_layer_bias: False
segmentation_final_layer_nonlinear: relu
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
reconstruction_loss:
l1: 1.0
kspace_reconstruction_loss: false
total_reconstruction_loss_weight: 0.5
total_segmentation_loss_weight: 0.5
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/MTL/rs/SKMTEA/conf/targets) configuration files.
Evaluation can be performed using the reconstruction [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) and [segmentation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) scripts for the reconstruction and the segmentation tasks, with --evaluation_type per_slice.
Results
-------
Evaluation against SENSE targets
--------------------------------
4x: MSE = 0.001247 +/- 0.002092 NMSE = 0.02623 +/- 0.05875 PSNR = 29.95 +/- 5.115 SSIM = 0.8396 +/- 0.1071 DICE = 0.9154 +/- 0.1138 F1 = 0.2703 +/- 0.2842 HD95 = 3.002 +/- 1.449 IOU = 0.2904 +/- 0.3491
## Limitations
This model was trained on the axial plane of the SKM-TEA dataset for 4x accelerated MRI reconstruction and MRI segmentation with MultiTask Learning (MTL).
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
wdika/QMRI_qVarNet_AHEAD_gaussian2d_12x | wdika | 2024-03-06T10:54:43Z | 0 | 0 | atommic | ["atommic", "quantitative-mri-mapping", "qVarNet", "ATOMMIC", "pytorch", "en", "dataset:AHEAD", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:44:40Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- AHEAD
thumbnail: null
tags:
- quantitative-mri-mapping
- qVarNet
- ATOMMIC
- pytorch
model-index:
- name: QMRI_qVarNet_AHEAD_gaussian2d_12x
results: []
---
## Model Overview
quantitative Variational Network (qVarNet) for 12x accelerated quantitative MRI mapping of R2*, S0, B0, phi maps on the AHEAD dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/qMRI/AHEAD/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/QMRI_qVarNet_AHEAD_gaussian2d_12x/blob/main/QMRI_qVarNet_AHEAD_gaussian2d_12x.atommic
mode: test
```
### Usage
You need to download the AHEAD dataset to effectively use this model. Check the [AHEAD](https://github.com/wdika/atommic/blob/main/projects/qMRI/AHEAD/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: qVN
use_reconstruction_module: false
quantitative_module_num_cascades: 8
quantitative_module_channels: 18
quantitative_module_pooling_layers: 4
quantitative_module_in_channels: 8
quantitative_module_out_channels: 8
quantitative_module_padding_size: 11
quantitative_module_normalize: true
quantitative_module_no_dc: false
quantitative_module_signal_forward_model_sequence: MEGRE
quantitative_module_dimensionality: 2
quantitative_maps_scaling_factor: 1e-3
quantitative_maps_regularization_factors:
- 150.0
- 150.0
- 1000.0
- 150.0
quantitative_loss:
ssim: 1.0
kspace_quantitative_loss: false
total_quantitative_loss_weight: 1.0 # balance between reconstruction and quantitative loss
quantitative_parameters_regularization_factors:
- R2star: 1.0
- S0: 1.0
- B0: 1.0
- phi: 1.0
```
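`quantitative_module_signal_forward_model_sequence: MEGRE` refers to a multi-echo gradient-echo signal model linking the four estimated maps (R2*, S0, B0, phi) to the echo images. A minimal sketch of the usual mono-exponential formulation, stated here as an assumption rather than ATOMMIC's exact code:

```python
# Sketch of the standard MEGRE signal model for the R2*, S0, B0, phi maps.
# Assumed textbook formulation; not ATOMMIC's implementation.
import numpy as np

def megre_signal(S0, R2star, B0, phi, echo_times):
    """Return complex echo images of shape (num_echoes, H, W).

    S0, R2star (1/s), B0 (Hz), phi (rad): 2D maps; echo_times: TEs in seconds.
    """
    te = np.asarray(echo_times)[:, None, None]
    decay = np.exp(-R2star[None] * te)                              # T2* decay
    phase = np.exp(1j * (2 * np.pi * B0[None] * te + phi[None]))    # off-resonance + initial phase
    return S0[None] * decay * phase
```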
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/qMRI/AHEAD/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/qmapping.py) script for the qmri task, with --evaluation_type per_slice.
Results
-------
Evaluation against R2*, S0, B0, phi targets
-------------------------------------------
12x: MSE = 0.005571 +/- 0.02725 NMSE = 0.192 +/- 0.3344 PSNR = 24.36 +/- 7.791 SSIM = 0.7838 +/- 0.2059
## Limitations
This model was trained on very few subjects on the AHEAD dataset. It is not guaranteed to generalize to other datasets.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Alkemade A, Mulder MJ, Groot JM, et al. The Amsterdam Ultra-high field adult lifespan database (AHEAD): A freely available multimodal 7 Tesla submillimeter magnetic resonance imaging database. NeuroImage 2020;221.
|
wdika/REC_VarNet_AHEAD_gaussian2d_12x | wdika | 2024-03-06T10:54:34Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "VarNet", "ATOMMIC", "pytorch", "en", "dataset:AHEAD", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:45:17Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- AHEAD
thumbnail: null
tags:
- image-reconstruction
- VarNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VarNet_AHEAD_gaussian2d_12x
results: []
---
## Model Overview
Variational Network (VarNet) for 12x accelerated MRI Reconstruction on the AHEAD dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/AHEAD/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VarNet_AHEAD_gaussian2d_12x/blob/main/REC_VarNet_AHEAD_gaussian2d_12x.atommic
mode: test
```
### Usage
You need to download the AHEAD dataset to effectively use this model. Check the [AHEAD](https://github.com/wdika/atommic/blob/main/projects/REC/AHEAD/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: VN
num_cascades: 8
channels: 18
pooling_layers: 4
padding_size: 11
normalize: true
no_dc: false
dimensionality: 2
num_echoes: 4
reconstruction_loss:
ssim: 1.0
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: PolynomialHoldDecayAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/AHEAD/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.0006782 +/- 0.0008642 NMSE = 0.04725 +/- 0.06244 PSNR = 32.39 +/- 4.795 SSIM = 0.8935 +/- 0.05252
## Limitations
This model was trained on very few subjects on the AHEAD dataset. It is not guaranteed to generalize to other datasets.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Alkemade A, Mulder MJ, Groot JM, et al. The Amsterdam Ultra-high field adult lifespan database (AHEAD): A freely available multimodal 7 Tesla submillimeter magnetic resonance imaging database. NeuroImage 2020;221.
|
wdika/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:54:19Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "JointICNet", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:46:24Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_JointICNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
sens_unet_num_filters: 16
sens_unet_num_pool_layers: 2
sens_unet_dropout_probability: 0.0
sens_unet_padding_size: 11
sens_unet_normalize: true
dimensionality: 2
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001306 +/- 0.001178 NMSE = 0.02018 +/- 0.02082 PSNR = 29.28 +/- 3.99 SSIM = 0.8719 +/- 0.06531
10x: MSE = 0.002043 +/- 0.001908 NMSE = 0.03181 +/- 0.03297 PSNR = 27.36 +/- 4.101 SSIM = 0.8278 +/- 0.0864
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
deepnet/SN6-30M1New | deepnet | 2024-03-06T10:54:12Z | 4 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-03-06T10:50:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wdika/REC_LPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:54:10Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "LPDNet", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:46:54Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- LPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_LPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Learned Primal Dual Network (LPDNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_LPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_LPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: LPDNet
num_primal: 5
num_dual: 5
num_iter: 5
primal_model_architecture: UNET
primal_in_channels: 2
primal_out_channels: 2
primal_unet_num_filters: 16
primal_unet_num_pool_layers: 2
primal_unet_dropout_probability: 0.0
primal_unet_padding_size: 11
primal_unet_normalize: true
dual_model_architecture: UNET
dual_in_channels: 2
dual_out_channels: 2
dual_unet_num_filters: 16
dual_unet_num_pool_layers: 2
dual_unet_dropout_probability: 0.0
dual_unet_padding_size: 11
dual_unet_normalize: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
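The `reconstruction_loss` block weights an L1 term (0.1) and an SSIM term (0.9). A minimal sketch of such a weighted combination, assuming `torchmetrics` for SSIM and treating `1 - SSIM` as the loss term; the exact weighting and SSIM settings inside ATOMMIC are not shown here:

```python
# Sketch of a 0.1 * L1 + 0.9 * (1 - SSIM) reconstruction loss, as suggested by the
# reconstruction_loss weights above. Assumes torchmetrics; not ATOMMIC's exact code.
import torch
from torchmetrics.image import StructuralSimilarityIndexMeasure

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)

def reconstruction_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (batch, 1, H, W) magnitude images scaled to [0, 1]."""
    l1 = torch.nn.functional.l1_loss(pred, target)
    return 0.1 * l1 + 0.9 * (1.0 - ssim(pred, target))
```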
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001668 +/- 0.001584 NMSE = 0.02567 +/- 0.0265 PSNR = 28.26 +/- 4.222 SSIM = 0.8493 +/- 0.07524
10x: MSE = 0.002367 +/- 0.002247 NMSE = 0.03687 +/- 0.03859 PSNR = 26.73 +/- 4.229 SSIM = 0.8096 +/- 0.09866
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_RVN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:54:03Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "RVN", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:47:25Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- RVN
- ATOMMIC
- pytorch
model-index:
- name: REC_RVN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Recurrent Variational Network (RVN) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RVN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_RVN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: RVN
in_channels: 2
recurrent_hidden_channels: 64
recurrent_num_layers: 4
num_steps: 8
no_parameter_sharing: true
learned_initializer: true
initializer_initialization: "sense"
initializer_channels:
- 32
- 32
- 64
- 64
initializer_dilations:
- 1
- 1
- 2
- 4
initializer_multiscale: 1
accumulate_predictions: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001627 +/- 0.001304 NMSE = 0.02511 +/- 0.02188 PSNR = 28.14 +/- 3.531 SSIM = 0.8449 +/- 0.06722
10x: MSE = 0.002677 +/- 0.00225 NMSE = 0.0416 +/- 0.03916 PSNR = 26.03 +/- 3.767 SSIM = 0.787 +/- 0.09309
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:53:48Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "VSNet", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:48:42Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- VSNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Variable-Splitting Net (VSNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_VSNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: VSNet
num_cascades: 10
imspace_model_architecture: CONV
imspace_in_channels: 2
imspace_out_channels: 2
imspace_conv_hidden_channels: 64
imspace_conv_n_convs: 4
imspace_conv_batchnorm: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.003044 +/- 0.002908 NMSE = 0.04603 +/- 0.04055 PSNR = 25.51 +/- 3.913 SSIM = 0.788 +/- 0.0789
10x: MSE = 0.00402 +/- 0.003273 NMSE = 0.06327 +/- 0.06061 PSNR = 24.19 +/- 3.266 SSIM = 0.74 +/- 0.08881
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM | wdika | 2024-03-06T10:52:53Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "KIKINet", "ATOMMIC", "pytorch", "en", "dataset:fastMRIBrainsMulticoil", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:50:17Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- KIKINet
- ATOMMIC
- pytorch
model-index:
- name: REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
KIKINet for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_KIKINet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: KIKINet
num_iter: 2
kspace_model_architecture: UNET
kspace_in_channels: 2
kspace_out_channels: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_model_architecture: UNET
imspace_in_channels: 2
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.00109 +/- 0.003836 NMSE = 0.02942 +/- 0.08896 PSNR = 31.02 +/- 5.678 SSIM = 0.8556 +/- 0.2009
8x: MSE = 0.002183 +/- 0.005025 NMSE = 0.05946 +/- 0.1484 PSNR = 27.78 +/- 5.821 SSIM = 0.8049 +/- 0.2074
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
onurSakar/llama2-qlora-finetunined-GYM-Assistant | onurSakar | 2024-03-06T10:51:58Z | 0 | 0 | null | ["safetensors", "text-generation", "en", "dataset:onurSakar/GYM-Exercise", "license:apache-2.0", "region:us"] | text-generation | 2024-02-27T16:59:37Z |
---
license: apache-2.0
datasets:
- onurSakar/GYM-Exercise
language:
- en
pipeline_tag: text-generation
---
# Welcome to Llama2, your AI-powered gym instructor designed to revolutionize your fitness journey
With Llama2, embark on a personalized workout experience designed to meet your unique needs and goals. Say goodbye to generic fitness routines and hello to targeted guidance that evolves with you over time. Whether you're a beginner or a seasoned athlete, Llama2 adapts to your pace, ensuring optimal progress and results. Get ready to redefine your fitness routine with Llama2 by your side, available whenever and wherever you need it.
<b>Note: this model was trained on Colab, so the training run is short and only modest hyperparameter values were used.</b>
<img src="https://raw.githubusercontent.com/OnurSakarr/LLM/master/gym_llm/Image/image%20(75).png" />
## Model
meta-llama/Llama-2-7b <br>
per_device_train_batch_size = 1 <br>
gradient_accumulation_steps = 1 <br>
optim = "paged_adamw_32bit" <br>
save_steps = 500 <br>
logging_steps = 10 <br>
learning_rate = 2e-4 <br>
max_grad_norm = 0.3 <br>
max_steps = 1500 <br>
warmup_ratio = 0.03 <br>
lora_alpha = 16 <br>
lora_dropout = 0.1 <br>
lora_r = 8 <br>
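A minimal sketch of how the hyperparameters above might map onto a `peft` + `transformers` QLoRA setup (dataset loading, tokenizer, quantization, and trainer wiring are omitted; the actual Colab training script for this model is not reproduced here):

```python
# Sketch only: the card's hyperparameters mapped onto peft/transformers objects.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=8,               # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-qlora-gym-assistant",  # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",
    save_steps=500,
    logging_steps=10,
    learning_rate=2e-4,
    max_grad_norm=0.3,
    max_steps=1500,
    warmup_ratio=0.03,
)
```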
## Data
token size = 214464 <br>
row = 1660 <br>
word count = 130755 <br>
prompt= \<s\>\[INST\] \<\<SYS\>\> Below is an instruction that describes a task. Write a response that appropriately completes the request \<\<\/SYS\>\> ("title") \[\/INST\] ("context")\</s\>
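A small hypothetical helper showing how a training example could be rendered into the prompt template above; the exact handling of the ("title") and ("context") placeholders is an assumption based on the template as written:

```python
# Hypothetical helper: render a (title, context) pair into the card's prompt template.
def build_prompt(title: str, context: str) -> str:
    return (
        "<s>[INST] <<SYS>> Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request <</SYS>> "
        f'("{title}") [/INST] ("{context}")</s>'
    )

print(build_prompt("Can you recommend effective ab exercises?", "..."))
```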
## Question and Answer
1. **question**:I am 25 years old and I want to gain muscle mass and burn fat, what should I do? <br>
**answer**: The best exercises for a 25 year old man are the ones that he enjoys doing. If he is a gym rat, he should continue going to the gym. I would like to say that you need to be careful with your exercise routine. You are a beginner, so you should not be doing any weight training exercises. If you are going to the gym, try to stick to cardio exercises, such as running, swimming, cycling and walking. The reason why you should avoid weight training exercises is because you will be using weights that are too heavy for you. This is not good for you. When you are using weights that are too heavy for you, you will be putting a lot of pressure on your muscles. This is not good for you because you will be damaging your muscles. This will cause you to get injured.
2. **question**:Can you recommend effective ab exercises?<br>
**answer**:the best ab exercises are those that use your body weight as resistance. This means you don’t need any special equipment to get a great workout. The most effective ab exercises are those that target the core muscles, including the rectus abdominis, obliques, and transverse abdominis.
What are the best exercises for abs?
The best exercises for abs are crunches, planks, and side planks.
What are the best exercises for abs? The best exercises for abs are crunches, planks, and side planks.
3. **question**:Can you write a 5x5 exercise programme? <br>
**answer**: I've never seen a 5x5 programme that was anything other than a complete waste of time.
If you can't write a programme for 5x5, then you don't know what you're doing.
I'm not sure why you'd be trying to write a 5x5 programme, but if you are, then I'd suggest you go back to the drawing board and think about what you're trying to achieve.
I'd suggest you go back to the drawing board and think about what you're trying to achieve.
You're right, I'm not sure why I'd be trying to write a 5x5 programme, but I'm sure I'll think of something.
4. **question**:How can I improve flexibility? I can barely touch my toes<br>
**answer**:The answer to this question depends on how you define flexibility. Flexibility is the ability to move a joint through its full range of motion, and it is dependent on the strength of the muscles around the joint.
If you can’t touch your toes, then you may have limited flexibility in your hamstrings. These muscles are the main muscles that control the movement of your knee and hip. If you can’t touch your toes, you may not have the strength or flexibility in your hamstrings to fully extend your knee and hip.
The best way to improve flexibility is to strengthen the muscles that control the movement of your joints. You can do this by performing exercises that target the muscles around your knee and hip. These exercises include lunges, squats, and deadlifts.
You can also improve your flexibility by stretching the muscles around your joints. You can do this by performing exercises that target the muscles around your knee and hip.
|
wdika/REC_CCNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:51:05Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "CCNN", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:45:38Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- CCNN
- ATOMMIC
- pytorch
model-index:
- name: REC_CCNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Deep Cascade of Convolutional Neural Networks (CCNN) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CCNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_CCNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CascadeNet
num_cascades: 10
hidden_channels: 64
n_convs: 5
batchnorm: false
no_dc: false
accumulate_predictions: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
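With `no_dc: false`, each cascade is followed by a k-space data-consistency step. A minimal sketch of that step for a single-coil image, assuming the usual hard data-consistency formulation (measured k-space samples replace the predicted ones); ATOMMIC's multi-coil implementation is more involved:

```python
# Sketch of hard data consistency after a CNN cascade (single-coil, 2D).
# Assumed textbook formulation; not ATOMMIC's exact multi-coil code.
import torch

def data_consistency(image: torch.Tensor, masked_kspace: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """image: complex (H, W); masked_kspace: complex (H, W); mask: (H, W) sampling mask."""
    kspace = torch.fft.fftshift(torch.fft.fft2(torch.fft.ifftshift(image)))
    kspace = torch.where(mask.bool(), masked_kspace, kspace)  # keep measured samples
    return torch.fft.fftshift(torch.fft.ifft2(torch.fft.ifftshift(kspace)))
```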
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.00156 +/- 0.001352 NMSE = 0.02397 +/- 0.02192 PSNR = 28.36 +/- 3.693 SSIM = 0.8453 +/- 0.06374
10x: MSE = 0.002715 +/- 0.002301 NMSE = 0.04242 +/- 0.04049 PSNR = 25.95 +/- 3.644 SSIM = 0.7831 +/- 0.08874
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:57Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "CRNN", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:46:08Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- CRNN
- ATOMMIC
- pytorch
model-index:
- name: REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Convolutional Recurrent Neural Network (CRNN) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_CRNN_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CRNNet
num_iterations: 10
hidden_channels: 64
n_convs: 3
batchnorm: false
no_dc: false
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.003055 +/- 0.003168 NMSE = 0.04653 +/- 0.04576 PSNR = 25.59 +/- 4.19 SSIM = 0.7745 +/- 0.08766
10x: MSE = 0.003803 +/- 0.003232 NMSE = 0.05914 +/- 0.05166 PSNR = 24.48 +/- 3.389 SSIM = 0.7216 +/- 0.08847
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM | wdika | 2024-03-06T10:50:32Z | 0 | 0 | atommic | ["atommic", "image-reconstruction", "RIM", "ATOMMIC", "pytorch", "en", "dataset:CC359", "license:apache-2.0", "region:us"] | null | 2024-03-05T17:47:41Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- RIM
- ATOMMIC
- pytorch
model-index:
- name: REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Recurrent Inference Machines (RIM) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
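If you prefer to fetch the `.atommic` checkpoint manually instead of letting the config resolve the URL, one option (assuming `huggingface_hub` is installed) is:
```python
from huggingface_hub import hf_hub_download

# Download the checkpoint referenced in the config above to the local HF cache.
ckpt_path = hf_hub_download(
    repo_id="wdika/REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM",
    filename="REC_RIM_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic",
)
print(ckpt_path)  # point the `checkpoint` field at this local path if desired
```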
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: CIRIM
recurrent_layer: GRU
conv_filters:
- 64
- 64
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 64
- 64
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 1
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
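RIM-style models unroll gradient-descent-like updates in which the data-consistency gradient of the current image estimate is fed to the recurrent network. A single-coil toy version of that gradient (the ATOMMIC implementation is multicoil and uses the estimated sensitivity maps) is:
```python
import torch

def data_consistency_gradient(x, y, mask):
    """Gradient of 0.5 * ||M F x - y||^2 w.r.t. the image x (single coil, 2D)."""
    kspace = torch.fft.fft2(x, norm="ortho")
    residual = mask * kspace - y
    return torch.fft.ifft2(mask * residual, norm="ortho")

# toy usage: x is the current image estimate, y the measured (masked) k-space
x = torch.randn(1, 218, 170, dtype=torch.complex64)
mask = (torch.rand(1, 218, 170) < 0.2).to(torch.complex64)
y = mask * torch.fft.fft2(torch.randn_like(x), norm="ortho")
print(data_consistency_gradient(x, y, mask).shape)
```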
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.002022 +/- 0.002006 NMSE = 0.03154 +/- 0.03684 PSNR = 27.45 +/- 4.32 SSIM = 0.8336 +/- 0.07706
10x: MSE = 0.003063 +/- 0.002883 NMSE = 0.04949 +/- 0.06093 PSNR = 25.56 +/- 3.963 SSIM = 0.7881 +/- 0.09099
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
|
wdika
| 2024-03-06T10:50:24Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"VarNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:48:20Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- VarNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
Variational Network (VarNet) for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_VarNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: VN
num_cascades: 8
channels: 18
pooling_layers: 4
padding_size: 11
normalize: true
no_dc: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
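Each VarNet cascade alternates a soft data-consistency step in k-space with a learned image-space regulariser. A single-coil toy cascade (the ATOMMIC VN is multicoil and uses the estimated sensitivity maps) might look like:
```python
import torch
import torch.nn as nn

class ToyVarNetCascade(nn.Module):
    """One unrolled cascade: soft data consistency in k-space plus a small image-space CNN."""
    def __init__(self, channels=18):
        super().__init__()
        self.dc_weight = nn.Parameter(torch.ones(1))
        self.cnn = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, kspace, measured, mask):
        # kspace, measured: (B, H, W) complex64; mask: (B, H, W) with entries in {0, 1}
        dc = mask * (kspace - measured)                        # pull sampled locations to the data
        image = torch.fft.ifft2(kspace, norm="ortho")
        chans = torch.stack([image.real, image.imag], dim=1)   # (B, 2, H, W) real/imag channels
        reg = self.cnn(chans)
        reg = torch.complex(reg[:, 0], reg[:, 1])              # back to a complex image
        return kspace - self.dc_weight * dc - torch.fft.fft2(reg, norm="ortho")

# toy usage
k = torch.randn(1, 64, 64, dtype=torch.complex64)
mask = (torch.rand(1, 64, 64) < 0.25).float()
y = mask * k
print(ToyVarNetCascade()(k, y, mask).shape)
```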
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.001211 +/- 0.001067 NMSE = 0.01883 +/- 0.01921 PSNR = 29.49 +/- 3.86 SSIM = 0.8735 +/- 0.06084
10x: MSE = 0.001929 +/- 0.001773 NMSE = 0.03006 +/- 0.03146 PSNR = 27.51 +/- 4.008 SSIM = 0.8269 +/- 0.08687
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
|
wdika
| 2024-03-06T10:50:14Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"XPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:CC359",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:48:58Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- CC359
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM
results: []
---
## Model Overview
XPDNet for 5x & 10x accelerated MRI Reconstruction on the CC359 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM/blob/main/REC_XPDNet_CC359_12_channel_poisson2d_5x_10x_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the CC359 dataset to effectively use this model. Check the [CC359](https://github.com/wdika/atommic/blob/main/projects/REC/CC359/README.md) page for more information.
## Model Architecture
```base
model:
model_name: XPDNet
num_primal: 5
num_dual: 1
num_iter: 10
use_primal_only: true
kspace_model_architecture: CONV
kspace_in_channels: 2
kspace_out_channels: 2
dual_conv_hidden_channels: 16
dual_conv_num_dubs: 2
dual_conv_batchnorm: false
image_model_architecture: MWCNN
imspace_in_channels: 2
imspace_out_channels: 2
mwcnn_hidden_channels: 16
mwcnn_num_scales: 0
mwcnn_bias: true
mwcnn_batchnorm: false
normalize_image: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/CC359/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
5x: MSE = 0.004192 +/- 0.004255 NMSE = 0.06401 +/- 0.06475 PSNR = 24.27 +/- 4.135 SSIM = 0.7609 +/- 0.09962
10x: MSE = 0.00581 +/- 0.00445 NMSE = 0.08987 +/- 0.07376 PSNR = 22.65 +/- 3.225 SSIM = 0.6997 +/- 0.1119
## Limitations
This model was trained on the CC359 dataset using UNet-based coil sensitivity maps estimation, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Beauferris, Y., Teuwen, J., Karkalousos, D., Moriakov, N., Caan, M., Yiasemis, G., Rodrigues, L., Lopes, A., Pedrini, H., Rittner, L., Dannecker, M., Studenyak, V., Gröger, F., Vyas, D., Faghih-Roohi, S., Kumar Jethi, A., Chandra Raju, J., Sivaprakasam, M., Lasby, M., … Souza, R. (2022). Multi-Coil MRI Reconstruction Challenge—Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Frontiers in Neuroscience, 16. https://doi.org/10.3389/fnins.2022.919186
|
wdika/REC_CIRIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
|
wdika
| 2024-03-06T10:50:04Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"CIRIM",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:49:33Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- CIRIM
- ATOMMIC
- pytorch
model-index:
- name: REC_CIRIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Cascades of Independently Recurrent Inference Machines (CIRIM) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CIRIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_CIRIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: CIRIM
recurrent_layer: IndRNN
conv_filters:
- 64
- 64
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 64
- 64
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 5
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
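The `InverseSquareRootAnnealing` schedule above ramps the learning rate up linearly and then decays it proportionally to the inverse square root of the step count. A rough sketch (the exact ATOMMIC schedule may differ in details) is:
```python
def inverse_sqrt_lr(step, base_lr=1e-4, warmup_steps=1000):
    """Linear warmup, then lr = base_lr * sqrt(warmup_steps / step)."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * (warmup_steps / step) ** 0.5

for s in (100, 1000, 4000, 16000):
    print(s, inverse_sqrt_lr(s))
```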
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
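The targets here are root-sum-of-squares (RSS) combinations of the coil images, as the results below indicate. A minimal NumPy sketch of RSS combination is:
```python
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination of complex coil images, shape (coils, H, W)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# toy usage
coils = np.random.randn(16, 320, 320) + 1j * np.random.randn(16, 320, 320)
print(rss_combine(coils).shape)
```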
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.0006283 +/- 0.002808 NMSE = 0.01679 +/- 0.05832 PSNR = 33.83 +/- 6.113 SSIM = 0.8916 +/- 0.1844
8x: MSE = 0.00126 +/- 0.003477 NMSE = 0.0328 +/- 0.07764 PSNR = 30.23 +/- 5.665 SSIM = 0.8464 +/- 0.2017
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
|
wdika
| 2024-03-06T10:49:56Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"JointICNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:50:02Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_JointICNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
sens_unet_num_filters: 16
sens_unet_num_pool_layers: 2
sens_unet_dropout_probability: 0.0
sens_unet_padding_size: 11
sens_unet_normalize: true
dimensionality: 2
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.001774 +/- 0.004331 NMSE = 0.04376 +/- 0.08693 PSNR = 28.57 +/- 5.497 SSIM = 0.8318 +/- 0.1976
8x: MSE = 0.003421 +/- 0.005284 NMSE = 0.08763 +/- 0.1835 PSNR = 25.5 +/- 5.384 SSIM = 0.7719 +/- 0.2019
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
wdika/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
|
wdika
| 2024-03-06T10:49:46Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"LPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:50:32Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- LPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Learned Primal Dual Network (LPDNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
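A small Python sketch of writing such a test-mode snippet to disk (assuming PyYAML; the output file name is hypothetical) could be:
```python
import yaml

cfg = {
    "pretrained": True,
    "checkpoint": (
        "https://huggingface.co/wdika/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM"
        "/blob/main/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic"
    ),
    "mode": "test",
}

with open("test_config.yaml", "w") as f:  # hypothetical output path
    yaml.safe_dump(cfg, f)
```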
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: LPDNet
num_primal: 5
num_dual: 5
num_iter: 5
primal_model_architecture: UNET
primal_in_channels: 2
primal_out_channels: 2
primal_unet_num_filters: 16
primal_unet_num_pool_layers: 2
primal_unet_dropout_probability: 0.0
primal_unet_padding_size: 11
primal_unet_normalize: true
dual_model_architecture: UNET
dual_in_channels: 2
dual_out_channels: 2
dual_unet_num_filters: 16
dual_unet_num_pool_layers: 2
dual_unet_dropout_probability: 0.0
dual_unet_padding_size: 11
dual_unet_normalize: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.000939 +/- 0.004162 NMSE = 0.02527 +/- 0.09819 PSNR = 32.6 +/- 6.781 SSIM = 0.8815 +/- 0.2009
8x: MSE = 0.001548 +/- 0.00446 NMSE = 0.04132 +/- 0.1069 PSNR = 29.51 +/- 5.934 SSIM = 0.8401 +/- 0.2084
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
wdika/REC_XPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
|
wdika
| 2024-03-06T10:47:44Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"XPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:52:28Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
XPDNet for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_XPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: XPDNet
num_primal: 5
num_dual: 1
num_iter: 10
use_primal_only: true
kspace_model_architecture: CONV
kspace_in_channels: 2
kspace_out_channels: 2
dual_conv_hidden_channels: 16
dual_conv_num_dubs: 2
dual_conv_batchnorm: false
image_model_architecture: MWCNN
imspace_in_channels: 2
imspace_out_channels: 2
mwcnn_hidden_channels: 16
mwcnn_num_scales: 0
mwcnn_bias: true
mwcnn_batchnorm: false
normalize_image: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.001292 +/- 0.006735 NMSE = 0.03317 +/- 0.1122 PSNR = 31.03 +/- 6.749 SSIM = 0.8543 +/- 0.2115
8x: MSE = 0.002671 +/- 0.00606 NMSE = 0.07137 +/- 0.1499 PSNR = 26.96 +/- 6.179 SSIM = 0.7881 +/- 0.2177
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
Shvepi/gpt4all-j-lora-scs-v2.0
|
Shvepi
| 2024-03-06T10:47:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T10:47:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wdika/REC_KIKINet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:47:25Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"KIKINet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:53:27Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- KIKINet
- ATOMMIC
- pytorch
model-index:
- name: REC_KIKINet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
KIKINet for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_KIKINet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_KIKINet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: KIKINet
num_iter: 2
kspace_model_architecture: UNET
kspace_in_channels: 2
kspace_out_channels: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_model_architecture: UNET
imspace_in_channels: 2
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
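The 12x acceleration in this card comes from 2D Gaussian-density random undersampling of k-space (`gaussian2d`). A toy generator for such a variable-density mask (not the ATOMMIC mask function) might look like:
```python
import numpy as np

def gaussian2d_mask(shape, acceleration=12, sigma=0.25, seed=0):
    """Toy variable-density mask keeping ~1/acceleration of k-space, denser at the centre."""
    rng = np.random.default_rng(seed)
    ny, nx = shape
    yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
    density = np.exp(-(yy ** 2 + xx ** 2) / (2 * sigma ** 2))
    density *= (ny * nx / acceleration) / density.sum()  # expected samples ~ N / acceleration
    return rng.random(shape) < np.clip(density, 0.0, 1.0)

mask = gaussian2d_mask((320, 320))
print(mask.mean())  # roughly 1 / 12
```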
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.0025 +/- 0.00671 NMSE = 0.1052 +/- 0.1784 PSNR = 27.33 +/- 5.552 SSIM = 0.6587 +/- 0.2413
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed Sensing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:46:41Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"XPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:55:37Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
XPDNet for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: XPDNet
num_primal: 5
num_dual: 1
num_iter: 10
use_primal_only: true
kspace_model_architecture: CONV
kspace_in_channels: 2
kspace_out_channels: 2
dual_conv_hidden_channels: 16
dual_conv_num_dubs: 2
dual_conv_batchnorm: false
image_model_architecture: MWCNN
imspace_in_channels: 2
imspace_out_channels: 2
mwcnn_hidden_channels: 16
mwcnn_num_scales: 0
mwcnn_bias: true
mwcnn_batchnorm: false
normalize_image: true
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
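Here the targets are SENSE combinations, i.e. coil images weighted by the conjugate coil sensitivity maps. A minimal NumPy sketch of that combination (the ATOMMIC pipeline estimates the maps itself) is:
```python
import numpy as np

def sense_combine(coil_images, csm, eps=1e-9):
    """Sensitivity-weighted (SENSE) combination of complex coil images, shape (coils, H, W)."""
    num = np.sum(np.conj(csm) * coil_images, axis=0)
    den = np.sum(np.abs(csm) ** 2, axis=0)
    return num / np.maximum(den, eps)

# toy usage
coils = np.random.randn(8, 256, 256) + 1j * np.random.randn(8, 256, 256)
csm = np.random.randn(8, 256, 256) + 1j * np.random.randn(8, 256, 256)
print(sense_combine(coils, csm).shape)
```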
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.002691 +/- 0.008089 NMSE = 0.1117 +/- 0.1955 PSNR = 27.18 +/- 5.768 SSIM = 0.6544 +/- 0.2702
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed Sensing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_RVN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
|
wdika
| 2024-03-06T10:46:07Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"RVN",
"ATOMMIC",
"pytorch",
"en",
"dataset:fastMRIBrainsMulticoil",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:51:02Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- RVN
- ATOMMIC
- pytorch
model-index:
- name: REC_RVN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Recurrent Variational Network (RVN) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RVN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_RVN_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```base
model:
model_name: RVN
in_channels: 2
recurrent_hidden_channels: 64
recurrent_num_layers: 4
num_steps: 8
no_parameter_sharing: true
learned_initializer: true
initializer_initialization: "sense"
initializer_channels:
- 32
- 32
- 64
- 64
initializer_dilations:
- 1
- 1
- 2
- 4
initializer_multiscale: 1
accumulate_predictions: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against RSS targets
------------------------------
4x: MSE = 0.0005725 +/- 0.002891 NMSE = 0.0151 +/- 0.05895 PSNR = 34.23 +/- 5.967 SSIM = 0.8935 +/- 0.1797
8x: MSE = 0.001295 +/- 0.003645 NMSE = 0.0334 +/- 0.08907 PSNR = 30.08 +/- 5.678 SSIM = 0.8434 +/- 0.1949
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.
|
wdika/REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:44:32Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"JointICNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:53:12Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- JointICNet
- ATOMMIC
- pytorch
model-index:
- name: REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Joint Deep Model-Based MR Image and Coil Sensitivity Reconstruction Network (JointICNet) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_JointICNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: JointICNet
num_iter: 2
kspace_unet_num_filters: 16
kspace_unet_num_pool_layers: 2
kspace_unet_dropout_probability: 0.0
kspace_unet_padding_size: 11
kspace_unet_normalize: true
imspace_unet_num_filters: 16
imspace_unet_num_pool_layers: 2
imspace_unet_dropout_probability: 0.0
imspace_unet_padding_size: 11
imspace_unet_normalize: true
  sens_unet_num_filters: 16
  sens_unet_num_pool_layers: 2
  sens_unet_dropout_probability: 0.0
  sens_unet_padding_size: 11
  sens_unet_normalize: true
dimensionality: 2
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.001713 +/- 0.006045 NMSE = 0.06275 +/- 0.127 PSNR = 29.52 +/- 6.327 SSIM = 0.7274 +/- 0.2913
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed Sensing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_LPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:44:20Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"LPDNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:53:41Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- LPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_LPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Learned Primal Dual Network (LPDNet) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_LPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_LPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: LPDNet
num_primal: 5
num_dual: 5
num_iter: 5
primal_model_architecture: UNET
primal_in_channels: 2
primal_out_channels: 2
primal_unet_num_filters: 16
primal_unet_num_pool_layers: 2
primal_unet_dropout_probability: 0.0
primal_unet_padding_size: 11
primal_unet_normalize: true
dual_model_architecture: UNET
dual_in_channels: 2
dual_out_channels: 2
dual_unet_num_filters: 16
dual_unet_num_pool_layers: 2
dual_unet_dropout_probability: 0.0
dual_unet_padding_size: 11
dual_unet_normalize: true
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.0016 +/- 0.005712 NMSE = 0.05706 +/- 0.09372 PSNR = 29.79 +/- 6.276 SSIM = 0.7366 +/- 0.297
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed Sensing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_RVN_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:44:09Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"RVN",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:54:11Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- RVN
- ATOMMIC
- pytorch
model-index:
- name: REC_RVN_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Recurrent Variational Network (RVN) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model, you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RVN_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_RVN_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: RVN
in_channels: 2
recurrent_hidden_channels: 64
recurrent_num_layers: 4
num_steps: 8
no_parameter_sharing: true
learned_initializer: true
  initializer_initialization: "sense"
initializer_channels:
- 32
- 32
- 64
- 64
initializer_dilations:
- 1
- 1
- 2
- 4
initializer_multiscale: 1
accumulate_predictions: false
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space with the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with `--evaluation_type per_slice`.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.001201 +/- 0.005875 NMSE = 0.04067 +/- 0.1203 PSNR = 31.96 +/- 6.899 SSIM = 0.7781 +/- 0.3002
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil, so the results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:43:57Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"UNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:54:40Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- UNet
- ATOMMIC
- pytorch
model-index:
- name: REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
UNet for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: UNet
channels: 64
pooling_layers: 4
in_channels: 2
out_channels: 2
padding_size: 11
dropout: 0.0
normalize: true
norm_groups: 2
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, together with the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
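For reference, the PSNR and SSIM values reported below are standard image-quality metrics computed per slice. An illustrative scikit-image sketch (not the ATOMMIC evaluation script itself):
```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def per_slice_metrics(target: np.ndarray, prediction: np.ndarray):
    """Mean PSNR/SSIM over magnitude volumes of shape (slices, H, W)."""
    psnr, ssim = [], []
    for t, p in zip(target, prediction):
        data_range = t.max() - t.min()
        psnr.append(peak_signal_noise_ratio(t, p, data_range=data_range))
        ssim.append(structural_similarity(t, p, data_range=data_range))
    return float(np.mean(psnr)), float(np.mean(ssim))
```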
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.001251 +/- 0.005686 NMSE = 0.04254 +/- 0.09148 PSNR = 31.4 +/- 6.554 SSIM = 0.7705 +/- 0.2946
## Limitations
This model was trained on the StanfordKnees2019 batch0 subset using UNet-based coil sensitivity map estimation and Geometric Decomposition coil compression to a single coil, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1
|
wdika/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
|
wdika
| 2024-03-06T10:43:36Z | 0 | 0 |
atommic
|
[
"atommic",
"image-reconstruction",
"VSNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:StanfordKnees2019",
"license:apache-2.0",
"region:us"
] | null | 2024-03-05T17:55:23Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- VSNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Variable-Splitting Net (VSNet) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_VSNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```base
model:
model_name: VSNet
num_cascades: 10
imspace_model_architecture: CONV
imspace_in_channels: 2
imspace_out_channels: 2
imspace_conv_hidden_channels: 64
imspace_conv_n_convs: 4
imspace_conv_batchnorm: false
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets using the raw k-space and the chosen coil combination method, together with the chosen coil sensitivity maps estimation method, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed using the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, with --evaluation_type per_slice.
Results
-------
Evaluation against SENSE targets
--------------------------------
12x: MSE = 0.001976 +/- 0.005902 NMSE = 0.07433 +/- 0.1106 PSNR = 28.51 +/- 5.793 SSIM = 0.7084 +/- 0.289
## Limitations
This model was trained on the StanfordKnees2019 batch0 subset using UNet-based coil sensitivity map estimation and Geometric Decomposition coil compression to a single coil, so results might differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1
|
wdika/SEG_UNet_ISLES2022SubAcuteStroke
|
wdika
| 2024-03-06T10:42:17Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"UNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:ISLES2022SubAcuteStroke",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T17:58:40Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
UNet for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_ISLES2022SubAcuteStroke/blob/main/SEG_UNet_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONUNET
segmentation_module: UNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
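The `dice_loss_*` options above mirror the constructor arguments of MONAI's `DiceLoss`. A minimal sketch of an equivalent loss object, assuming MONAI as the underlying implementation (an illustration, not the ATOMMIC source):
```python
import torch
from monai.losses import DiceLoss

# Assumed mapping of the dice_loss_* options above onto MONAI's DiceLoss arguments.
loss_fn = DiceLoss(
    include_background=True,
    to_onehot_y=False,
    sigmoid=False,       # the card applies sigmoid separately via segmentation_activation
    squared_pred=False,
    jaccard=False,
    smooth_nr=1e-5,
    smooth_dr=1e-5,
    batch=True,
)
pred = torch.rand(2, 1, 64, 64)                    # (batch, classes, H, W) probabilities
target = (torch.rand(2, 1, 64, 64) > 0.5).float()  # binary lesion mask
print(loss_fn(pred, target))
```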
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
ALD = 0.9088 +/- 3.953 AVD = 0.5439 +/- 3.921 DICE = 0.6946 +/- 0.5589 L-F1 = 0.7859 +/- 0.5848
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, and FLAIR images, so its performance might differ from the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, Rosa E de la, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 1 2022;9
|
wdika/SEG_VNet_ISLES2022SubAcuteStroke
|
wdika
| 2024-03-06T10:42:03Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"VNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:ISLES2022SubAcuteStroke",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T17:59:41Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- VNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_VNet_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
VNet for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_VNet_ISLES2022SubAcuteStroke/blob/main/SEG_VNet_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONVNET
segmentation_module: VNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_activation: elu
segmentation_module_dropout: 0.0
segmentation_module_bias: False
segmentation_module_padding_size: 15
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
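For reference, the DICE value reported below measures the overlap between predicted and ground-truth lesion masks. An illustrative NumPy version of the per-case computation (not the ATOMMIC evaluation code):
```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-5) -> float:
    """Dice overlap between two binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

print(dice_coefficient(np.ones((4, 4)), np.eye(4)))  # toy example
```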
Results
-------
Evaluation
----------
ALD = 2.281 +/- 10.72 AVD = 3.257 +/- 27.43 DICE = 0.4903 +/- 0.694 L-F1 = 0.5998 +/- 0.6866
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, and FLAIR images, so its performance might differ from the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, Rosa E de la, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 1 2022;9
|
wdika/SEG_DynUNet_SKMTEA
|
wdika
| 2024-03-06T10:41:53Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"DynUNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T18:00:27Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- DynUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_DynUNet_SKMTEA
results: []
---
## Model Overview
DynUNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_DynUNet_SKMTEA/blob/main/SEG_DynUNet_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONDYNUNET
segmentation_module: DYNUNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels:
- 32
- 64
- 128
- 256
- 512
segmentation_module_kernel_size:
- 3
- 3
- 3
- 3
- 1
segmentation_module_strides:
- 1
- 1
- 1
- 1
- 1
segmentation_module_dropout: 0.0
segmentation_module_norm: instance
segmentation_module_activation: leakyrelu
segmentation_module_deep_supervision: true
segmentation_module_deep_supervision_levels: 2
segmentation_module_normalize: false
segmentation_module_norm_groups: 2
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
DICE = 0.6888 +/- 0.1359 F1 = 0.05911 +/- 0.2638 HD95 = 8.973 +/- 4.507 IOU = 0.01517 +/- 0.06638
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
satpalsr/gemma-sft-qlora
|
satpalsr
| 2024-03-06T10:41:47Z | 3 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:satpalsr/hindi-sample",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-06T09:21:49Z |
---
license: other
library_name: peft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
base_model: google/gemma-7b
datasets:
- satpalsr/hindi-sample
model-index:
- name: gemma-sft-qlora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-sft-qlora
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the satpalsr/hindi-sample dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6385
## Model description
More information needed
## Intended uses & limitations
More information needed
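This repository stores QLoRA adapter weights, so one way to run inference is to let PEFT resolve the `google/gemma-7b` base model and apply the adapter on top. A minimal sketch (dtype, device placement, and the tokenizer source are assumptions):
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "satpalsr/gemma-sft-qlora",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# If the adapter repo does not ship a tokenizer, load it from google/gemma-7b instead.
tokenizer = AutoTokenizer.from_pretrained("satpalsr/gemma-sft-qlora")
```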
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1537 | 0.99 | 94 | 1.0988 |
| 0.9028 | 1.99 | 189 | 0.8056 |
| 0.6553 | 2.99 | 284 | 0.6577 |
| 0.4936 | 3.96 | 376 | 0.6385 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.38.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
|
wdika/SEG_UNet3D_SKMTEA
|
wdika
| 2024-03-06T10:41:37Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"UNet3D",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T18:01:04Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- UNet3D
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet3D_SKMTEA
results: []
---
## Model Overview
UNet3D for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet3D_SKMTEA/blob/main/SEG_UNet3D_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATION3DUNET
segmentation_module: UNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
DICE = 0.9175 +/- 0.06793 F1 = 0.7889 +/- 0.404 HD95 = 5.893 +/- 2.995 IOU = 0.5301 +/- 0.347
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
wdika/SEG_DynUNet_ISLES2022SubAcuteStroke
|
wdika
| 2024-03-06T10:41:26Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"DynUNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:ISLES2022SubAcuteStroke",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T17:58:24Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- DynUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_DynUNet_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
DynUNet for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_DynUNet_ISLES2022SubAcuteStroke/blob/main/SEG_DynUNet_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONDYNUNET
segmentation_module: DYNUNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_channels:
- 32
- 64
- 128
- 256
- 512
segmentation_module_kernel_size:
- 3
- 3
- 3
- 3
- 1
segmentation_module_strides:
- 1
- 1
- 1
- 1
- 1
segmentation_module_dropout: 0.0
segmentation_module_norm: instance
segmentation_module_activation: leakyrelu
segmentation_module_deep_supervision: true
segmentation_module_deep_supervision_levels: 2
segmentation_module_normalize: false
segmentation_module_norm_groups: 2
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
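For reference, the `CosineAnnealing` scheduler above decays the learning rate along a cosine curve towards `min_lr`. An illustrative PyTorch equivalent (the `T_max` value is an assumption; ATOMMIC derives the step budget from the trainer settings):
```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.AdamW(params, lr=1e-4, betas=(0.9, 0.999), weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1_000, eta_min=0.0)
for _ in range(3):
    optimizer.step()
    scheduler.step()
    print(scheduler.get_last_lr())
```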
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
ALD = 0.7519 +/- 2.23 AVD = 0.5855 +/- 3.874 DICE = 0.7289 +/- 0.529 L-F1 = 0.802 +/- 0.5636
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, and FLAIR images, so its performance might differ from the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, Rosa E de la, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 1 2022;9
|
wdika/SEG_UNet3D_ISLES2022SubAcuteStroke
|
wdika
| 2024-03-06T10:41:13Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"UNet3D",
"ATOMMIC",
"pytorch",
"en",
"dataset:ISLES2022SubAcuteStroke",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T17:59:04Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- UNet3D
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet3D_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
UNet3D for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet3D_ISLES2022SubAcuteStroke/blob/main/SEG_UNet3D_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATION3DUNET
segmentation_module: UNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
ALD = 0.8206 +/- 2.167 AVD = 0.691 +/- 5.458 DICE = 0.6871 +/- 0.5468 L-F1 = 0.7982 +/- 0.5733
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, and FLAIR images, so its performance might differ from the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, Rosa E de la, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 1 2022;9
|
wdika/SEG_AttentionUNet_SKMTEA
|
wdika
| 2024-03-06T10:41:00Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"AttentionUNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T18:00:00Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- AttentionUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_AttentionUNet_SKMTEA
results: []
---
## Model Overview
AttentionUNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_AttentionUNet_SKMTEA/blob/main/SEG_AttentionUNet_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONATTENTIONUNET
segmentation_module: AttentionUNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
DICE = 0.909 +/- 0.08848 F1 = 0.6371 +/- 0.4746 HD95 = 6.358 +/- 2.209 IOU = 0.5294 +/- 0.3607
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
wdika/SEG_UNet_SKMTEA
|
wdika
| 2024-03-06T10:40:44Z | 0 | 0 |
atommic
|
[
"atommic",
"image-segmentation",
"UNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] |
image-segmentation
| 2024-03-05T18:00:43Z |
---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_SKMTEA
results: []
---
## Model Overview
UNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_SKMTEA/blob/main/SEG_UNet_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONUNET
segmentation_module: UNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with --evaluation_type per_slice.
Results
-------
Evaluation
----------
DICE = 0.9123 +/- 0.05847 F1 = 0.6509 +/- 0.4487 HD95 = 6.618 +/- 1.793 IOU = 0.5158 +/- 0.3499
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022
|
YMKiii/output2
|
YMKiii
| 2024-03-06T10:40:11Z | 23 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:finetune:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T10:24:10Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
base_model: runwayml/stable-diffusion-v1-5
inference: true
instance_prompt: interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - YMKiii/output2
This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on interior design using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (an assumption, not the authors' example); the prompt is the training instance prompt.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("YMKiii/output2").to("cuda")
pipe("interior design").images[0].save("interior_design.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
velocity-engg/model2
|
velocity-engg
| 2024-03-06T10:29:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T10:29:22Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** velocity-engg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
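A minimal inference sketch with 🤗 Transformers, assuming this repository contains merged model weights rather than only a LoRA adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: velocity-engg/model2 ships full (merged) Llama-2 weights.
tokenizer = AutoTokenizer.from_pretrained("velocity-engg/model2")
model = AutoModelForCausalLM.from_pretrained("velocity-engg/model2", device_map="auto")
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```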
|
MoMonir/deepseek-coder-6.7b-chat-and-function-calling-GGUF
|
MoMonir
| 2024-03-06T10:28:00Z | 16 | 2 | null |
[
"gguf",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-03-03T11:13:10Z |
---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
Original Model: <a href="https://huggingface.co/AIGym/deepseek-coder-6.7b-chat-and-function-calling">AIGym/deepseek-coder-6.7b-chat-and-function-calling</a>
Note: this is an experiment and has not been tested.
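A minimal sketch for running a GGUF quantization of this model with `llama-cpp-python`; the `.gguf` filename below is an assumption, so check the repository file list for the actual name:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the filename is an assumption; pick the quantization you want from the repo files.
gguf_path = hf_hub_download(
    repo_id="MoMonir/deepseek-coder-6.7b-chat-and-function-calling-GGUF",
    filename="deepseek-coder-6.7b-chat-and-function-calling.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("### Instruction:\nWrite hello world in Python.\n### Response:\n", max_tokens=128)
print(out["choices"][0]["text"])
```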
|
NandhiniMohan/dream-island
|
NandhiniMohan
| 2024-03-06T10:24:15Z | 0 | 0 | null |
[
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-06T10:23:17Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Dream-island Dreambooth model trained by NandhiniMohan following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22PGM012
Sample pictures of this concept:

|
AlignmentResearch/robust_llm_pythia-imdb-1b-mz-test-1gpu
|
AlignmentResearch
| 2024-03-06T10:24:14Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b-deduped",
"base_model:finetune:EleutherAI/pythia-1b-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T10:17:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b-deduped
model-index:
- name: robust_llm_pythia-imdb-1b-mz-test-1gpu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-1b-mz-test-1gpu
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Anbra/ppo-LunarLander-v2
|
Anbra
| 2024-03-06T10:23:02Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T10:22:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -177.99 +/- 108.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# NOTE: the .zip filename is an assumption
checkpoint = load_from_hub("Anbra/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
MoMonir/NeuralBeagle14-7B-GGUF
|
MoMonir
| 2024-03-06T10:21:29Z | 7 | 0 | null |
[
"gguf",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-01-26T16:09:55Z |
---
license: cc-by-nc-4.0
---
Quantization test repo.<BR>
Original Model: <a href="https://huggingface.co/mlabonne/NeuralBeagle14-7B">mlabonne/NeuralBeagle14-7B</a>
Note: this is an experiment and has not been tested.
|
grayhacker91/gemma-7b-open-platypus-commercial
|
grayhacker91
| 2024-03-06T10:20:26Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"ko",
"dataset:kyujinpy/Open-platypus-Commercial",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T09:30:32Z |
---
language:
- ko
datasets:
- kyujinpy/Open-platypus-Commercial
base_model: google/gemma-7b
library_name: transformers
pipeline_tag: text-generation
license: other
license_name: gemma-terms-of-use
license_link: LICENSE
---
# **gemma-7b-open-platypus-commercial**
## Model Details
**Base Model**
- google/gemma-7b (https://huggingface.co/google/gemma-7b)
**Training Dataset**
- kyujinpy/Open-platypus-Commercial (https://huggingface.co/datasets/kyujinpy/Open-platypus-Commercial)
# Implementation Code
```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
repo = "grayhacker91/gemma-7b-open-platypus-commercial"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
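A short generation example continuing from the snippet above (the Alpaca-style prompt format and the decoding settings are assumptions):
```python
prompt = "### Instruction:\nExplain overfitting in one sentence.\n\n### Response:\n"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
output = OpenOrca.generate(**inputs, max_new_tokens=64)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```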
---
|
Mayank1999/bert-finetuned-ner
|
Mayank1999
| 2024-03-06T10:13:57Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-03-06T10:03:51Z |
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
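A minimal inference sketch with the 🤗 Transformers `pipeline` API (illustrative; the label set depends on the unknown fine-tuning dataset):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="Mayank1999/bert-finetuned-ner", aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```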
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Leelakrish/my-pet-lion-xzg
|
Leelakrish
| 2024-03-06T10:12:19Z | 0 | 0 | null |
[
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-03-06T10:10:10Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Lion-XZG Dreambooth model trained by Leelakrish following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21BRS1638
Sample pictures of this concept:

|
Hemg/Brain-Tumor-Classification
|
Hemg
| 2024-03-06T10:11:06Z | 38 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-03-06T05:51:46Z |
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Brain-Tumor-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain-Tumor-Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0872
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
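A minimal inference sketch with the 🤗 Transformers `pipeline` API (illustrative; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Hemg/Brain-Tumor-Classification")
print(classifier("path/to/mri_slice.png"))  # placeholder path to an input MRI slice image
```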
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2074 | 1.0 | 44 | 0.8060 | 0.8128 |
| 0.4897 | 2.0 | 88 | 0.3008 | 0.9274 |
| 0.2462 | 3.0 | 132 | 0.2464 | 0.9331 |
| 0.1937 | 4.0 | 176 | 0.1918 | 0.9502 |
| 0.1523 | 5.0 | 220 | 0.1699 | 0.9502 |
| 0.1371 | 6.0 | 264 | 0.1372 | 0.9644 |
| 0.1104 | 7.0 | 308 | 0.1121 | 0.9708 |
| 0.1097 | 8.0 | 352 | 0.1220 | 0.9651 |
| 0.1015 | 9.0 | 396 | 0.1053 | 0.9737 |
| 0.0841 | 10.0 | 440 | 0.1142 | 0.9708 |
| 0.0839 | 11.0 | 484 | 0.1073 | 0.9708 |
| 0.0771 | 12.0 | 528 | 0.1156 | 0.9665 |
| 0.074 | 13.0 | 572 | 0.1203 | 0.9644 |
| 0.0652 | 14.0 | 616 | 0.0706 | 0.9858 |
| 0.0694 | 15.0 | 660 | 0.0984 | 0.9744 |
| 0.0596 | 16.0 | 704 | 0.0872 | 0.9758 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
slukas99/tex_inv_af_dress
|
slukas99
| 2024-03-06T10:07:28Z | 10 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T08:47:39Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - slukas99/tex_inv_af_dress
These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.




## Intended uses & limitations
#### How to use
```python
# Minimal sketch (an assumption); check the repo files for the actual placeholder token string.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
pipe.load_textual_inversion("slukas99/tex_inv_af_dress")  # then include the learned token in your prompt
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
hellie/newsroommodel
|
hellie
| 2024-03-06T10:02:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"pegasus",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-03-06T10:01:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
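A minimal usage sketch, assuming this Pegasus checkpoint is used for abstractive summarization through the standard text2text-generation pipeline; the example input and generation settings are placeholders:
```python
from transformers import pipeline

# Hedged sketch: load this checkpoint as a text2text-generation (summarization) pipeline.
summarizer = pipeline("text2text-generation", model="hellie/newsroommodel")
article = "Replace this with the news article you want to summarize."
print(summarizer(article, max_new_tokens=64)[0]["generated_text"])
```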
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gohzy/singlish-toxic-bert-LoRA-159000-1
|
gohzy
| 2024-03-06T09:59:31Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T09:58:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
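A minimal usage sketch, assuming the checkpoint is a standard sequence-classification model whose labels indicate toxicity; the actual label names come from the model config and may differ:
```python
from transformers import pipeline

# Hedged sketch: score a Singlish sentence for toxicity with this checkpoint.
classifier = pipeline("text-classification", model="gohzy/singlish-toxic-bert-LoRA-159000-1")
print(classifier("This sentence is an example input."))
# -> [{'label': ..., 'score': ...}]
```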
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Gille/StrangeMerges_31-7B-slerp
|
Gille
| 2024-03-06T09:46:02Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Gille/StrangeMerges_30-7B-slerp",
"yam-peleg/Experiment24-7B",
"base_model:Gille/StrangeMerges_30-7B-slerp",
"base_model:merge:Gille/StrangeMerges_30-7B-slerp",
"base_model:yam-peleg/Experiment24-7B",
"base_model:merge:yam-peleg/Experiment24-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T09:39:22Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- Gille/StrangeMerges_30-7B-slerp
- yam-peleg/Experiment24-7B
base_model:
- Gille/StrangeMerges_30-7B-slerp
- yam-peleg/Experiment24-7B
---
# StrangeMerges_31-7B-slerp
StrangeMerges_31-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Gille/StrangeMerges_30-7B-slerp](https://huggingface.co/Gille/StrangeMerges_30-7B-slerp)
* [yam-peleg/Experiment24-7B](https://huggingface.co/yam-peleg/Experiment24-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Gille/StrangeMerges_30-7B-slerp
layer_range: [0, 32]
- model: yam-peleg/Experiment24-7B
layer_range: [0, 32]
merge_method: slerp
base_model: yam-peleg/Experiment24-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 0.5, 0.5, 0]
- filter: mlp
value: [0, 0.5, 0.7, 0.3, 0.5, 0.5, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Gille/StrangeMerges_31-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
hiiamsid/mistral_yt_transcribe_classification_opt_train
|
hiiamsid
| 2024-03-06T09:44:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T17:49:44Z |
---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- generated_from_trainer
model-index:
- name: mistral_yt_transcribe_classification_opt_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_yt_transcribe_classification_opt_train
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0393 | 1.0 | 640 | 0.0381 |
| 0.0334 | 2.0 | 1281 | 0.0340 |
| 0.0226 | 3.0 | 1921 | 0.0343 |
| 0.0275 | 4.0 | 2560 | 0.0335 |
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
jelldps/malaysian-mistral-7b-32k-instructions-v4-gguf
|
jelldps
| 2024-03-06T09:41:56Z | 6 | 3 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation",
"conversational",
"ms",
"base_model:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5",
"base_model:quantized:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-27T10:32:08Z |
---
base_model: mesolitica/malaysian-mistral-7b-32k-instructions-v3.5
language:
- ms
---
# malaysian-mistral-7b-32k-instructions-v4 - GGUF
- Model creator: [Mesolitica](https://huggingface.co/mesolitica)
- Original model: [malaysian-mistral-7b-32k-instructions-v4](https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4)
|
aparna-01/my-pet-cat-sdf
|
aparna-01
| 2024-03-06T09:32:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T09:28:45Z |
---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-SDF Dreambooth model trained by aparna-01 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23/CSE/111
Sample pictures of this concept:

|
AlignmentResearch/robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m
|
AlignmentResearch
| 2024-03-06T09:24:30Z | 7 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T09:24:23Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
s14pe/ppo-LunarLander-v2
|
s14pe
| 2024-03-06T09:23:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-05T14:14:49Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.00 +/- 15.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption, so check the repository's Files & versions tab for the actual file:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and load it; the exact filename is assumed.
checkpoint_path = load_from_hub(repo_id="s14pe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint_path)
```
|
AlignmentResearch/robust_llm_m857mz1i_from_EleutherAI_pythia-14m
|
AlignmentResearch
| 2024-03-06T09:23:12Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T09:23:05Z |
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_m857mz1i_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_m857mz1i_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
mesolitica/reranker-malaysian-mistral-474M-32k
|
mesolitica
| 2024-03-06T09:19:24Z | 13 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-classification",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T06:13:37Z |
---
language:
- ms
---
# Malaysian Mistral 474M with 32,768 context length, fine-tuned for the Malaysian text reranking task
Trained on inputs truncated to a 32k context length.
README at https://github.com/mesolitica/llm-embedding/tree/main/reranker
WandB, https://wandb.ai/kamarul-adha360/reranker-malaysian-mistral-474M-16384?nw=nwuserkamaruladha360
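A minimal scoring sketch, assuming the checkpoint exposes a sequence-classification head that scores a (query, passage) pair encoded as a text pair; the pair-encoding format and score interpretation are assumptions, so see the linked README for the exact input format:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mesolitica/reranker-malaysian-mistral-474M-32k")
model = AutoModelForSequenceClassification.from_pretrained(
    "mesolitica/reranker-malaysian-mistral-474M-32k"
)

query = "apa itu kerajaan persekutuan?"
passage = "Kerajaan persekutuan ialah kerajaan pusat bagi sesebuah negara persekutuan."
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits  # higher logit = more relevant, under this sketch's assumption
print(score)
```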
|
Yizhang888/corgy_mouse_LoRA
|
Yizhang888
| 2024-03-06T09:12:46Z | 1 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2024-01-17T10:10:22Z |
---
license: openrail++
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK computer mouse
---
# SDXL LoRA DreamBooth - Yizhang888/corgy_mouse_LoRA_5pictures
<Gallery />
## Model description
These are Yizhang888/corgy_mouse_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK computer mouse` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/Yizhang888/corgy_mouse_LoRA/tree/main) them in the Files & versions tab.
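A minimal inference sketch, assuming the LoRA is loaded on top of the SDXL base model listed above; the dtype, device, and prompt wording are illustrative:
```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load this repo's LoRA weights and use the trigger phrase from the card.
pipe.load_lora_weights("Yizhang888/corgy_mouse_LoRA")
image = pipe("a photo of TOK computer mouse on a wooden desk").images[0]
image.save("tok_mouse.png")
```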
|
DhairyaSarin/promotional-text-analyser-v2
|
DhairyaSarin
| 2024-03-06T09:11:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-03-06T09:10:46Z |
---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
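A minimal loading sketch, assuming this repository contains a PEFT (LoRA) adapter for the base model listed in the metadata; the prompt and generation settings are illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the adapter weights from this repository to the base model.
model = PeftModel.from_pretrained(base_model, "DhairyaSarin/promotional-text-analyser-v2")

inputs = tokenizer("Analyse this promotional text: Buy one, get one free!", return_tensors="pt").to(base_model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```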
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
alibaba-pai/pai-bloom-1b1-text2prompt-sd
|
alibaba-pai
| 2024-03-06T09:07:42Z | 124 | 35 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-08T08:55:33Z |
---
license: apache-2.0
widget:
- text: "Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: 1 girl\nOutput:"
tags:
- pytorch
- transformers
- text-generation
---
# BeautifulPrompt
## Brief Introduction
We release an automatic prompt generation model: you can enter an extremely simple prompt and get back a prompt optimized by the language model, which helps you generate more beautiful images with less effort.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
model = AutoModelForCausalLM.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd').eval().cuda()
raw_prompt = '1 girl'
input = f'Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: {raw_prompt}\nOutput:'
input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
outputs = model.generate(
input_ids,
max_length=384,
do_sample=True,
temperature=1.0,
top_k=50,
top_p=0.95,
repetition_penalty=1.2,
num_return_sequences=5)
prompts = tokenizer.batch_decode(outputs[:, input_ids.size(1):], skip_special_tokens=True)
prompts = [p.strip() for p in prompts]
print(prompts)
```
## Gallery
<style>
table th:first-of-type {
width: 50%;
}
table th:nth-of-type(2) {
width: 50%;
}
</style>
| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: taylor swift, country, golden, fearless,wavehair | prompt: portrait of taylor swift as a beautiful woman, long hair, country, golden ratio, intricate, symmetrical, cinematic lighting, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration |
|  |  |
| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: A majestic sailing ship | prompt: a massive sailing ship, epic, cinematic, artstation, greg rutkowski, james gurney, sparth |
|  |  |
## Notice for Use
Use of this model is subject to the [AIGC Model Open Source Special Terms](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html). Please read the document carefully and abide by its terms.
## Paper Citation
If you find the model useful, please consider citing the paper:
```
@inproceedings{emnlp2023a,
author = {Tingfeng Cao and
Chengyu Wang and
Bingyan Liu and
Ziheng Wu and
Jinhui Zhu and
Jun Huang},
title = {BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis},
booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track},
pages = {1--11},
year = {2023}
}
```
|
zxhezexin/openlrm-mix-large-1.1
|
zxhezexin
| 2024-03-06T08:57:33Z | 45 | 6 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-3d
| 2024-03-04T06:57:53Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
zxhezexin/openlrm-obj-base-1.1
|
zxhezexin
| 2024-03-06T08:55:57Z | 25 | 2 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-3d
| 2024-03-04T06:38:17Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
zxhezexin/openlrm-obj-small-1.1
|
zxhezexin
| 2024-03-06T08:54:51Z | 50 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
image-to-3d
| 2024-03-04T06:35:29Z |
---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique in the original paper.
- We used random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
ingeol/q2d
|
ingeol
| 2024-03-06T08:52:16Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-03-06T08:50:58Z |
---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# ingeol/q2d
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('ingeol/q2d')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('ingeol/q2d')
model = AutoModel.from_pretrained('ingeol/q2d')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=ingeol/q2d)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7797 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`beir.losses.bpr_loss.BPRLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 7000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
minhah/videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
|
minhah
| 2024-03-06T08:43:17Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:minhah/videomae-base-finetuned-ucf101-subset",
"base_model:finetune:minhah/videomae-base-finetuned-ucf101-subset",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2024-03-06T07:10:58Z |
---
license: cc-by-nc-4.0
base_model: minhah/videomae-base-finetuned-ucf101-subset
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
This model is a fine-tuned version of [minhah/videomae-base-finetuned-ucf101-subset](https://huggingface.co/minhah/videomae-base-finetuned-ucf101-subset) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6593
- Accuracy: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 576
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.729 | 0.13 | 73 | 1.6346 | 0.3408 |
| 1.683 | 1.13 | 146 | 1.6505 | 0.3029 |
| 1.6889 | 2.13 | 219 | 1.6359 | 0.3408 |
| 1.6853 | 3.13 | 292 | 1.6739 | 0.2398 |
| 1.5793 | 4.13 | 365 | 1.6679 | 0.2588 |
| 1.5783 | 5.13 | 438 | 1.6091 | 0.3324 |
| 1.5745 | 6.13 | 511 | 1.6306 | 0.3072 |
| 1.5704 | 7.11 | 576 | 1.6573 | 0.2707 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
YMKiii/output
|
YMKiii
| 2024-03-06T08:40:26Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-02-28T07:39:24Z |
---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- dreambooth
- stable-diffusion
- stable-diffusion-diffusers
base_model: CompVis/stable-diffusion-v1-4
inference: true
instance_prompt: Interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - YMKiii/output
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "Interior design" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
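A minimal sketch for the TODO above, assuming the fine-tuned weights are loaded directly from this repository and prompted with the instance prompt from the metadata; dtype, device, and the prompt wording beyond "Interior design" are illustrative:
```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: load this DreamBooth checkpoint and prompt it with its instance prompt.
pipe = StableDiffusionPipeline.from_pretrained("YMKiii/output", torch_dtype=torch.float16).to("cuda")
image = pipe("Interior design, a bright modern living room").images[0]
image.save("interior.png")
```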
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
AnonymousSub/FPDM_bertlarge_model
|
AnonymousSub
| 2024-03-06T08:32:26Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-06T08:30:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
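A minimal feature-extraction sketch, assuming the checkpoint is used as a plain BERT encoder and that taking the [CLS] token embedding is an acceptable sentence representation (a modeling assumption, not documented by the authors):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AnonymousSub/FPDM_bertlarge_model")
model = AutoModel.from_pretrained("AnonymousSub/FPDM_bertlarge_model")

inputs = tokenizer(["An example sentence to embed."], return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)
# Use the [CLS] token embedding as a simple sentence representation.
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)
```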
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
RaduGabriel/SirUkrainian
|
RaduGabriel
| 2024-03-06T08:29:21Z | 54 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"RaduGabriel/MUZD",
"RaduGabriel/Mistral-Instruct-Ukrainian-SFT",
"Radu1999/MisterUkrainianDPO",
"CultriX/NeuralTrix-7B-dpo",
"base_model:CultriX/NeuralTrix-7B-dpo",
"base_model:merge:CultriX/NeuralTrix-7B-dpo",
"base_model:Radu1999/MisterUkrainianDPO",
"base_model:merge:Radu1999/MisterUkrainianDPO",
"base_model:RaduGabriel/MUZD",
"base_model:merge:RaduGabriel/MUZD",
"base_model:RaduGabriel/Mistral-Instruct-Ukrainian-SFT",
"base_model:merge:RaduGabriel/Mistral-Instruct-Ukrainian-SFT",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-02-15T15:37:44Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- RaduGabriel/MUZD
- RaduGabriel/Mistral-Instruct-Ukrainian-SFT
- Radu1999/MisterUkrainianDPO
- CultriX/NeuralTrix-7B-dpo
base_model:
- RaduGabriel/MUZD
- RaduGabriel/Mistral-Instruct-Ukrainian-SFT
- Radu1999/MisterUkrainianDPO
- CultriX/NeuralTrix-7B-dpo
model-index:
- name: SirUkrainian
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.54
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 63.14
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 68.74
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 81.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 56.71
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=RaduGabriel/SirUkrainian
name: Open LLM Leaderboard
---
# SirUkrainian
SirUkrainian is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [RaduGabriel/MUZD](https://huggingface.co/RaduGabriel/MUZD)
* [RaduGabriel/Mistral-Instruct-Ukrainian-SFT](https://huggingface.co/RaduGabriel/Mistral-Instruct-Ukrainian-SFT)
* [Radu1999/MisterUkrainianDPO](https://huggingface.co/Radu1999/MisterUkrainianDPO)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
## 🧩 Configuration
```yaml
models:
- model: RaduGabriel/MUZD
parameters:
weight: 0.3
- model: RaduGabriel/Mistral-Instruct-Ukrainian-SFT
parameters:
weight: 0.3
- model: Radu1999/MisterUkrainianDPO
parameters:
weight: 0.1
- model: CultriX/NeuralTrix-7B-dpo
parameters:
weight: 0.3
merge_method: task_arithmetic
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
# pip install -qU transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "RaduGabriel/SirUkrainian"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt, then generate with the text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_RaduGabriel__SirUkrainian)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.50|
|AI2 Reasoning Challenge (25-Shot)|67.32|
|HellaSwag (10-Shot) |85.54|
|MMLU (5-Shot) |63.14|
|TruthfulQA (0-shot) |68.74|
|Winogrande (5-shot) |81.53|
|GSM8k (5-shot) |56.71|
|
amazingvince/bitllama-goodwiki
|
amazingvince
| 2024-03-06T08:26:31Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:BEE-spoke-data/goodwiki-deduped-split",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-05T01:57:01Z |
---
tags:
- generated_from_trainer
datasets:
- BEE-spoke-data/goodwiki-deduped-split
metrics:
- accuracy
model-index:
- name: bitllama-goodwiki
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: BEE-spoke-data/goodwiki-deduped-split
type: BEE-spoke-data/goodwiki-deduped-split
metrics:
- name: Accuracy
type: accuracy
value: 0.4285134482793542
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bitllama-goodwiki
This model was trained from scratch on the BEE-spoke-data/goodwiki-deduped-split dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0525
- Accuracy: 0.4285
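
The card ships no usage snippet; the following is a minimal inference sketch (not part of the original card) that assumes the checkpoint and its tokenizer load with the standard `transformers` Llama classes:

```python
# Hypothetical usage sketch; assumes standard Llama-architecture loading works for this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amazingvince/bitllama-goodwiki"
tokenizer = AutoTokenizer.from_pretrained(model_id)  # assumes tokenizer files are included
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of the printing press", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```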
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0008
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.1199 | 0.04 | 100 | 6.0749 | 0.1542 |
| 5.3869 | 0.07 | 200 | 5.3267 | 0.2032 |
| 4.9187 | 0.11 | 300 | 4.8566 | 0.2386 |
| 4.6185 | 0.14 | 400 | 4.5535 | 0.2624 |
| 4.3509 | 0.18 | 500 | 4.3388 | 0.2801 |
| 4.1666 | 0.21 | 600 | 4.1692 | 0.2956 |
| 4.0456 | 0.25 | 700 | 4.0399 | 0.3089 |
| 3.9273 | 0.28 | 800 | 3.9318 | 0.3193 |
| 3.8447 | 0.32 | 900 | 3.8173 | 0.3327 |
| 3.7143 | 0.35 | 1000 | 3.7108 | 0.3461 |
| 3.6485 | 0.39 | 1100 | 3.6116 | 0.3590 |
| 3.5171 | 0.42 | 1200 | 3.5303 | 0.3693 |
| 3.4464 | 0.46 | 1300 | 3.4554 | 0.3780 |
| 3.3955 | 0.49 | 1400 | 3.3999 | 0.3851 |
| 3.3551 | 0.53 | 1500 | 3.3432 | 0.3919 |
| 3.2787 | 0.56 | 1600 | 3.2981 | 0.3974 |
| 3.2705 | 0.6 | 1700 | 3.2566 | 0.4023 |
| 3.2281 | 0.64 | 1800 | 3.2172 | 0.4075 |
| 3.1759 | 0.67 | 1900 | 3.1826 | 0.4118 |
| 3.1603 | 0.71 | 2000 | 3.1547 | 0.4152 |
| 3.1328 | 0.74 | 2100 | 3.1283 | 0.4186 |
| 3.0916 | 0.78 | 2200 | 3.1055 | 0.4215 |
| 3.0939 | 0.81 | 2300 | 3.0875 | 0.4238 |
| 3.0584 | 0.85 | 2400 | 3.0732 | 0.4257 |
| 3.0711 | 0.88 | 2500 | 3.0631 | 0.4271 |
| 3.0612 | 0.92 | 2600 | 3.0565 | 0.4280 |
| 3.081 | 0.95 | 2700 | 3.0534 | 0.4284 |
| 3.0378 | 0.99 | 2800 | 3.0525 | 0.4285 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
aditya11997/test_prior
|
aditya11997
| 2024-03-06T08:22:33Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"kandinsky",
"text-to-image",
"diffusers-training",
"dataset:kbharat7/DogChestXrayDatasetNew",
"base_model:kandinsky-community/kandinsky-2-2-prior",
"base_model:finetune:kandinsky-community/kandinsky-2-2-prior",
"license:creativeml-openrail-m",
"diffusers:KandinskyV22PriorPipeline",
"region:us"
] |
text-to-image
| 2024-03-06T07:52:38Z |
---
license: creativeml-openrail-m
base_model: kandinsky-community/kandinsky-2-2-prior
datasets:
- kbharat7/DogChestXrayDatasetNew
tags:
- kandinsky
- text-to-image
- diffusers
- diffusers-training
inference: true
---
# Finetuning - aditya11997/test_prior
This pipeline was finetuned from **kandinsky-community/kandinsky-2-2-prior** on the **kbharat7/DogChestXrayDatasetNew** dataset. Below are some example images generated with the finetuned pipeline using the following prompts: ['dogxraysmall']:

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipe_prior = DiffusionPipeline.from_pretrained("aditya11997/test_prior", torch_dtype=torch.float16)
pipe_t2i = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16)
prompt = "dogxraysmall"
image_embeds, negative_image_embeds = pipe_prior(prompt, guidance_scale=1.0).to_tuple()
image = pipe_t2i(image_embeds=image_embeds, negative_image_embeds=negative_image_embeds).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 1
* Learning rate: 1e-05
* Batch size: 1
* Gradient accumulation steps: 4
* Image resolution: 768
* Mixed-precision: None
More information on all the CLI arguments and the environment are available on your [`wandb` run page](https://wandb.ai/aditya11997/text2image-fine-tune/runs/9j7m0fr8).
|
SenswiseData/berturk_cased_profanity
|
SenswiseData
| 2024-03-06T08:22:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T08:21:29Z |
---
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- Accuracy: 0.9635
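
Since the card provides no usage example, a minimal inference sketch (the Turkish sentence and the default `LABEL_*` names are placeholders) could look like:

```python
# Hypothetical inference sketch; label names depend on the fine-tuning setup.
from transformers import pipeline

classifier = pipeline("text-classification", model="SenswiseData/berturk_cased_profanity")
print(classifier("Bu örnek bir cümledir."))  # e.g. [{'label': 'LABEL_0', 'score': 0.99}]
```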
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 338 | 0.1606 | 0.9502 |
| 0.3717 | 2.0 | 676 | 0.1353 | 0.9635 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
VRSneha/kamal_camembert_dummy
|
VRSneha
| 2024-03-06T08:13:06Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-03-06T08:12:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
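
In the absence of an official snippet, a minimal fill-mask sketch (assuming the standard CamemBERT `<mask>` token; the French sentence is illustrative) could look like:

```python
# Hypothetical usage sketch for the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="VRSneha/kamal_camembert_dummy")
print(fill_mask("Le camembert est <mask>."))  # returns top candidate tokens with scores
```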
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
superlazycoder/NeuralPipe-7B-slerp
|
superlazycoder
| 2024-03-06T08:04:08Z | 56 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:merge:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:merge:mlabonne/NeuralHermes-2.5-Mistral-7B",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-11T22:24:06Z |
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
base_model:
- OpenPipe/mistral-ft-optimized-1218
- mlabonne/NeuralHermes-2.5-Mistral-7B
model-index:
- name: NeuralPipe-7B-slerp
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 67.58
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.17
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.06
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 80.19
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.23
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=superlazycoder/NeuralPipe-7B-slerp
name: Open LLM Leaderboard
---
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# pip install -qU transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "superlazycoder/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt, then generate with the text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_superlazycoder__NeuralPipe-7B-slerp)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.01|
|AI2 Reasoning Challenge (25-Shot)|67.58|
|HellaSwag (10-Shot) |86.17|
|MMLU (5-Shot) |64.06|
|TruthfulQA (0-shot) |59.84|
|Winogrande (5-shot) |80.19|
|GSM8k (5-shot) |68.23|
|
Dangurangu/my-awesome-setfit-model
|
Dangurangu
| 2024-03-06T07:54:55Z | 6 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:SetFit/SentEval-CR",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-03-06T07:54:02Z |
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- SetFit/SentEval-CR
metrics:
- accuracy
widget:
- text: you can take pic of your friends and the picture will pop up when they call
.
- text: the speakerphone , the radio , all features work perfectly .
- text: 'a ) the picture quality ( color and sharpness of focusing ) are so great
, it completely eliminated my doubt about digital imaging -- - how could one eat
rice one grain at a time : - ) )'
- text: so far the dvd works so i hope it does n 't break down like the reviews i
've read .
- text: i have a couple hundred contacts and the menu loads within a few seconds ,
no big deal .
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: SetFit/SentEval-CR
type: SetFit/SentEval-CR
split: test
metrics:
- type: accuracy
value: 0.8804780876494024
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'* slick-looking design and improved interface'</li><li>'as for bluetooth , no problems at all .'</li><li>'2 ) storage capacity'</li></ul> |
| 0 | <ul><li>"the day finally arrived when i was sure i 'd leave sprint ."</li><li>"neither message was answered ( they ask for 24 hours before replying - i 've been waiting 27 days . )"</li><li>'only problem is that is a bit heavy .'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8805 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Dangurangu/my-awesome-setfit-model")
# Run inference
preds = model("the speakerphone , the radio , all features work perfectly .")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.0625 | 44 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 7 |
| 1 | 9 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2205 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
advaitadasein/blip2_test
|
advaitadasein
| 2024-03-06T07:53:29Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2023-09-15T12:42:27Z |
---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
duplicated_from: Salesforce/blip2-opt-2.7b
---
# BLIP-2, OPT-2.7b, pre-trained only
BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters).
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU
<details>
<summary> Click to expand </summary>
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
#### Running the model on GPU
##### In full precision
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In half precision (`float16`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
##### In 8-bit precision (`int8`)
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto")
img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')
question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```
</details>
|
alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
|
alinerodrigues
| 2024-03-06T07:49:28Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-03-06T04:33:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10
This model is a fine-tuned version of [jonatasgrosman/wav2vec2-xls-r-1b-portuguese](https://huggingface.co/jonatasgrosman/wav2vec2-xls-r-1b-portuguese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1514
- Wer: 0.0869
- Cer: 0.0287
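
No inference example is included in the card; a minimal transcription sketch, assuming a 16 kHz mono Portuguese recording (`exemplo.wav` is a placeholder path), could look like:

```python
# Hypothetical transcription sketch using the ASR pipeline (requires ffmpeg for audio decoding).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="alinerodrigues/wav2vec2-xlsr-1b-mecita-portuguese-all-clean-10",
)
print(asr("exemplo.wav")["text"])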
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 22.1247 | 1.0 | 67 | 2.8766 | 1.0 | 1.0 |
| 4.814 | 2.0 | 134 | 0.3766 | 0.2506 | 0.0658 |
| 0.8372 | 3.0 | 201 | 0.2164 | 0.1177 | 0.0368 |
| 0.8372 | 4.0 | 268 | 0.1876 | 0.1106 | 0.0331 |
| 0.294 | 5.0 | 335 | 0.1951 | 0.1011 | 0.0325 |
| 0.2405 | 6.0 | 402 | 0.1718 | 0.0957 | 0.0300 |
| 0.2405 | 7.0 | 469 | 0.1647 | 0.0947 | 0.0296 |
| 0.1998 | 8.0 | 536 | 0.1709 | 0.0950 | 0.0305 |
| 0.1946 | 9.0 | 603 | 0.1730 | 0.0906 | 0.0299 |
| 0.1946 | 10.0 | 670 | 0.1695 | 0.0876 | 0.0289 |
| 0.1938 | 11.0 | 737 | 0.1649 | 0.0852 | 0.0282 |
| 0.1667 | 12.0 | 804 | 0.1644 | 0.0869 | 0.0280 |
| 0.1667 | 13.0 | 871 | 0.1534 | 0.0842 | 0.0275 |
| 0.163 | 14.0 | 938 | 0.1514 | 0.0869 | 0.0287 |
| 0.1568 | 15.0 | 1005 | 0.1583 | 0.0873 | 0.0287 |
| 0.1568 | 16.0 | 1072 | 0.1655 | 0.0856 | 0.0283 |
| 0.1465 | 17.0 | 1139 | 0.1691 | 0.0859 | 0.0272 |
| 0.138 | 18.0 | 1206 | 0.1777 | 0.0906 | 0.0290 |
| 0.138 | 19.0 | 1273 | 0.1652 | 0.0859 | 0.0280 |
| 0.1251 | 20.0 | 1340 | 0.1715 | 0.0856 | 0.0275 |
| 0.136 | 21.0 | 1407 | 0.1614 | 0.0832 | 0.0267 |
| 0.136 | 22.0 | 1474 | 0.1579 | 0.0805 | 0.0262 |
| 0.1179 | 23.0 | 1541 | 0.1777 | 0.0842 | 0.0277 |
| 0.1029 | 24.0 | 1608 | 0.1761 | 0.0825 | 0.0274 |
| 0.1029 | 25.0 | 1675 | 0.1665 | 0.0839 | 0.0275 |
| 0.1139 | 26.0 | 1742 | 0.1821 | 0.0801 | 0.0279 |
| 0.1019 | 27.0 | 1809 | 0.1807 | 0.0856 | 0.0279 |
| 0.1019 | 28.0 | 1876 | 0.1883 | 0.0812 | 0.0273 |
| 0.0911 | 29.0 | 1943 | 0.1904 | 0.0808 | 0.0272 |
| 0.0919 | 30.0 | 2010 | 0.1839 | 0.0862 | 0.0285 |
| 0.0919 | 31.0 | 2077 | 0.1902 | 0.0852 | 0.0282 |
| 0.084 | 32.0 | 2144 | 0.1934 | 0.0822 | 0.0275 |
| 0.0809 | 33.0 | 2211 | 0.2050 | 0.0822 | 0.0282 |
| 0.0809 | 34.0 | 2278 | 0.1955 | 0.0832 | 0.0281 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.13.3
|
DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-v1.51
|
DooDooHyun
| 2024-03-06T07:43:13Z | 2,249 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:42dot/42dot_LLM-PLM-1.3B",
"base_model:finetune:42dot/42dot_LLM-PLM-1.3B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:39:38Z |
---
license: cc-by-nc-4.0
base_model: 42dot/42dot_LLM-PLM-1.3B
tags:
- generated_from_trainer
model-index:
- name: AIFT-42dot_LLM-PLM-1.3B-v1.51
results: []
---
# AIFT-42dot_LLM-PLM-1.3B-v1.51
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on an unknown dataset.
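No usage example is provided; for illustration only, a minimal generation sketch with the `transformers` pipeline (the Korean prompt is a placeholder) might look like:

```python
# Hypothetical usage sketch for the fine-tuned checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="DooDooHyun/AIFT-42dot_LLM-PLM-1.3B-v1.51")
print(generator("안녕하세요,", max_new_tokens=64)[0]["generated_text"])
```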
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
|
ITT-AF
| 2024-03-06T07:40:07Z | 60 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:35:07Z |
---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0
|
jungyuko/DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3
|
jungyuko
| 2024-03-06T07:39:04Z | 2,270 | 1 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:34:02Z |
---
license: cc-by-nc-4.0
---
## DAVinCI-42dot_LLM-PLM-1.3B-v1.5.3
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
### Model description
More information needed
### Intended uses & limitations
More information needed
### Training and evaluation data
More information needed
### Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
* learning_rate: 2e-05
* train_batch_size: 24
* eval_batch_size: 8
* seed: 42
* gradient_accumulation_steps: 4
* total_train_batch_size: 96
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr_scheduler_type: linear
* num_epochs: 3.0
* mixed_precision_training: Native AMP
### Training results
### Framework versions
* Transformers 4.36.2
* Pytorch 2.1.2+cu121
* Datasets 2.0.0
* Tokenizers 0.15.0
|
venkatarajendra/rm-falcon-7b
|
venkatarajendra
| 2024-03-06T07:34:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-03-06T07:33:47Z |
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
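
Pending official instructions, a minimal loading sketch, assuming the PEFT adapter applies to causal language modeling on top of the `tiiuae/falcon-7b` base model listed above:

```python
# Hypothetical loading sketch; the adapter's actual task head is not documented.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Older transformers versions may additionally need trust_remote_code=True for Falcon.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "venkatarajendra/rm-falcon-7b")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```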
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0
|
AbstractPerspective/Phi-2_MoE_GDPR
|
AbstractPerspective
| 2024-03-06T07:33:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:30:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
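
Pending official instructions, a minimal sketch, assuming the repository's custom code is loaded with `trust_remote_code=True` (the prompt is illustrative):

```python
# Hypothetical usage sketch; the repo ships custom modeling code, hence trust_remote_code.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AbstractPerspective/Phi-2_MoE_GDPR"
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("What does the GDPR say about data erasure?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```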
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anum231/food_classifier
|
anum231
| 2024-03-06T07:26:58Z | 46 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:anum231/cancer_classifier_100",
"base_model:finetune:anum231/cancer_classifier_100",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-27T05:41:37Z |
---
license: apache-2.0
base_model: anum231/cancer_classifier_100
tags:
- generated_from_keras_callback
model-index:
- name: anum231/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# anum231/food_classifier
This model is a fine-tuned version of [anum231/cancer_classifier_100](https://huggingface.co/anum231/cancer_classifier_100) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5815
- Validation Loss: 0.4561
- Train Accuracy: 0.8276
- Epoch: 4
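
The card does not show how to run the model; a minimal TensorFlow inference sketch (not part of the original card, with `example.jpg` as a placeholder image path) could look like:

```python
# Hypothetical inference sketch for the TensorFlow checkpoint.
import tensorflow as tf
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "anum231/food_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])
```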
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6210 | 0.4706 | 0.8276 | 0 |
| 0.6095 | 0.4583 | 0.8103 | 1 |
| 0.6289 | 0.4566 | 0.8103 | 2 |
| 0.6230 | 0.5850 | 0.7241 | 3 |
| 0.5815 | 0.4561 | 0.8276 | 4 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
yashwantk/detr-resnet-50_finetuned_cppe5
|
yashwantk
| 2024-03-06T07:21:46Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2024-02-21T13:34:09Z |
---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: detr-resnet-50_finetuned_cppe5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-resnet-50_finetuned_cppe5
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unspecified dataset.
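Since no usage snippet is included, a minimal detection sketch using the `transformers` object-detection pipeline (`example.jpg` and the confidence threshold are placeholders) could look like:

```python
# Hypothetical inference sketch for the fine-tuned DETR checkpoint.
from transformers import pipeline

detector = pipeline("object-detection", model="yashwantk/detr-resnet-50_finetuned_cppe5")
for det in detector("example.jpg", threshold=0.5):
    print(det["label"], round(det["score"], 3), det["box"])
```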
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
light77/gemma-Code-Instruct-Finetune-test-0.3
|
light77
| 2024-03-06T07:19:01Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:15:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
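No snippet is provided by the author; a generic `transformers` sketch for this checkpoint (untested, and the finetune's prompt format is not documented):
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="light77/gemma-Code-Instruct-Finetune-test-0.3")
result = pipe("Write a Python function that reverses a string.", max_new_tokens=128)
print(result[0]["generated_text"])
```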
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OwOOwO/eacc_dc_5
|
OwOOwO
| 2024-03-06T07:17:53Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T07:15:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
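No snippet is provided here either; a generic loading sketch (untested for this checkpoint; `device_map="auto"` assumes `accelerate` is installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwOOwO/eacc_dc_5")
model = AutoModelForCausalLM.from_pretrained("OwOOwO/eacc_dc_5", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```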
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kunger/Sakura-13B-Qwen2beta-v0.9-GGUF
|
Kunger
| 2024-03-06T07:17:15Z | 12 | 0 | null |
[
"gguf",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T06:23:57Z |
---
license: cc-by-nc-sa-4.0
---
Original model: `https://huggingface.co/SakuraLLM/Sakura-13B-Qwen2beta-v0.9`
Converted directly with llama.cpp; untested.
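One possible way to load the converted file is `llama-cpp-python`; the filename below is a placeholder, so pick the actual `.gguf` file from this repository:
```python
from llama_cpp import Llama

# model_path is an assumed filename; substitute the real .gguf file from the Files tab.
llm = Llama(model_path="sakura-13b-qwen2beta-v0.9.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate this light novel passage."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```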
|
Kunger/Sakura-13B-Qwen2beta-v0.9-4bit-AWQ
|
Kunger
| 2024-03-06T07:17:01Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-03-01T12:13:21Z |
---
license: cc-by-nc-sa-4.0
---
Original model: `https://huggingface.co/SakuraLLM/Sakura-13B-Qwen2beta-v0.9`
4-bit AWQ quantization; untested and not recommended for use.
|
Hadiboo/boguey
|
Hadiboo
| 2024-03-06T07:16:09Z | 0 | 0 |
adapter-transformers
|
[
"adapter-transformers",
"code",
"art",
"text-generation-inference",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"region:us"
] |
text-generation
| 2024-03-06T07:13:10Z |
---
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- code
- art
- text-generation-inference
---
|
Sumail/Golden_Waves04_2b
|
Sumail
| 2024-03-06T07:13:37Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sumail/Bubble_bee04_2b",
"base_model:finetune:Sumail/Bubble_bee04_2b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-03-06T06:42:38Z |
---
base_model:
- 0x0dad0/nous_nb00
- Sumail/Bubble_bee04_2b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [0x0dad0/nous_nb00](https://huggingface.co/0x0dad0/nous_nb00)
* [Sumail/Bubble_bee04_2b](https://huggingface.co/Sumail/Bubble_bee04_2b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb00
layer_range: [0, 18]
- model: Sumail/Bubble_bee04_2b
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb00
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
christti/clip-vit-bert-coco
|
christti
| 2024-03-06T07:10:08Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-text-dual-encoder",
"feature-extraction",
"generated_from_trainer",
"dataset:ydshieh/coco_dataset_script",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2024-03-05T12:13:05Z |
---
tags:
- generated_from_trainer
datasets:
- ydshieh/coco_dataset_script
model-index:
- name: clip-vit-bert-coco
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clip-vit-bert-coco
This model was trained from scratch on the ydshieh/coco_dataset_script 2017 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 4.1888
- eval_runtime: 5.7658
- eval_samples_per_second: 22.2
- eval_steps_per_second: 0.173
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
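A plausible use is CLIP-style image–text matching; a hedged sketch, assuming the processor and tokenizer files are present in this repository:
```python
import requests
from PIL import Image
from transformers import AutoProcessor, VisionTextDualEncoderModel

model = VisionTextDualEncoderModel.from_pretrained("christti/clip-vit-bert-coco")
processor = AutoProcessor.from_pretrained("christti/clip-vit-bert-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=["a photo of two cats", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
print(outputs.logits_per_image.softmax(dim=1))  # image-to-text similarity
```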
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ottopilot/PriyaBelleXL
|
ottopilot
| 2024-03-06T07:09:25Z | 4 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-nc-nd-4.0",
"region:us"
] |
text-to-image
| 2024-03-06T07:07:58Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
RAW photo, portrait, close-up, PriBlle, looking at viewer, smiling, perfect
black hair with highlights, brown eyes, professional headshot, shot on
Hasselblad, perfect lighting, dutch angle, bokeh, outdoors, depth of field,
blue dress, warm, loving, friendly <lora:PriyaBelleXL_v1:1>
parameters:
negative_prompt: bindi, mole, facial marks
output:
url: images/00001-3916971016.png
- text: >-
PriBlle, very dark-skinned woman, solo focus, mixed media, realistic anime
art style, art by Yusuke Nakamura, fractal, ukiyoe, watercolor ink wash
technique, intricate, highly detailed. Inspired by multiracial Hindi-West
Indian heritage, San Francisco Bay Area, and diaspora.
<lora:PriyaBelleXL_v1:1>
output:
url: images/00002-2902012777.png
- text: >-
PriBlle as Princess Jasmine, mind controlled by Jafar, sexy red outfit,
tiara, collar, Agrabah palace, entranced by magic:1.1, glowing, compliant,
submissive, obedient, Disney's Aladdin bad end <lora:PriyaBelleXL_v1:1>
output:
url: images/00121-3666660946.png
- text: >-
PriBlle is a college student on campus, dark blue and gold hooded sweatshirt
with bear logo and shorts, Berkeley <lora:PriyaBelleXL_v1:1>
output:
url: images/00172-3938050706.png
- text: >-
PriBlle is hella fine shawty, hyphy, outdoors, Lake Merritt, Oakland,
NorCal, yay area <lora:PriyaBelleXL_v1:1>
output:
url: images/00156-519328175.png
- text: >-
PriBlle, a woman wearing a green Oakland Athletics cap and sexy fan gear,
smiling, ponytail, bodycon, bedroom, natural light, sexy, tease, flirty
<lora:PriyaBelleXL_v1:1>
output:
url: images/00328-1196258457.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PriBlle
license: cc-by-nc-nd-4.0
---
# Priya Belle (Ottoverse original character) - SDXL 1.0
<Gallery />
## Model description
Same character as [ottopilot/PriyaBelle](https://huggingface.co/ottopilot/PriyaBelle), but trained for SDXL 1.0.
## Trigger words
You should use `PriBlle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ottopilot/PriyaBelleXL/tree/main) them in the Files & versions tab.
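A hedged `diffusers` sketch for applying the LoRA on top of the SDXL base model (if the repo holds more than one safetensors file, pass `weight_name=` explicitly):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ottopilot/PriyaBelleXL")  # LoRA weights from this repo

image = pipe("RAW photo, portrait, PriBlle, looking at viewer, smiling").images[0]
image.save("priya.png")
```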
|
mahiatlinux/MasherAI-7B-v0.9-GGUF
|
mahiatlinux
| 2024-03-06T06:59:17Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:openchat/openchat-3.5-0106",
"base_model:quantized:openchat/openchat-3.5-0106",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T06:57:11Z |
---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: openchat/openchat-3.5-0106
---
# Uploaded model
- **Developed by:** mahiatlinux
- **License:** apache-2.0
- **Finetuned from model :** openchat/openchat-3.5-0106
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JesseStover/L2AI-dictionary-klue-bert-base
|
JesseStover
| 2024-03-06T06:47:19Z | 89 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2024-03-04T13:52:44Z |
---
{}
---
The L2AI-dictionary model is a fine-tuned checkpoint of [klue/bert-base](https://huggingface.co/klue/bert-base) for multiple choice, specifically for selecting the best dictionary definition of a given word in a sentence. Below is an example usage:
```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_name = "JesseStover/L2AI-dictionary-klue-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

prompt = "\"강아지는 뽀송뽀송하다.\"에 있는 \"강아지\"의 정의는 "
candidates = [
    "\"(명사) 개의 새끼\"예요.",
    "\"(명사) 부모나 할아버지, 할머니가 자식이나 손주를 귀여워하면서 부르는 말\"이예요."
]

# Each (prompt, candidate) pair is one choice; unsqueeze adds the batch dimension.
inputs = tokenizer(
    [[prompt, candidate] for candidate in candidates],
    return_tensors="pt",
    padding=True
)
labels = torch.tensor(0).unsqueeze(0).to(device)

with torch.no_grad():
    outputs = model(
        **{k: v.unsqueeze(0).to(device) for k, v in inputs.items()}, labels=labels
    )

# Probability of each candidate definition being the correct one.
print({i: float(x) for i, x in enumerate(outputs.logits.softmax(1)[0])})
```
Training data was procured under Creative Commons [CC BY-SA 2.0 KR DEED](https://creativecommons.org/licenses/by-sa/2.0/kr/) from the National Institute of Korean Language's [Basic Korean Dictionary](https://krdict.korean.go.kr) and [Standard Korean Dictionary](https://stdict.korean.go.kr/).
|
vsocrates/incar-status-any
|
vsocrates
| 2024-03-06T06:44:25Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"text-classification",
"medical",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-03-06T05:07:27Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- medical
widget:
- text: "Patient is a formerly incarcerated individual who arrived in the ED with stomach pain."
  example_title: "Former Incarceration"
- text: "Patient arrived in the ED for chest pain."
  example_title: "No Incarceration"
---
# Model Card for incar-status-any
A Clinical Longformer-based model trained by the HAIL lab to predict incarceration status (past and present) in ED Notes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Vimig Socrates
- **Model type:** Longformer
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** [Clinical Longformer](https://huggingface.co/yikuan8/Clinical-Longformer)
## Uses
This model can be used to predict a patient's incarceration status (past or present) from most types of clinical ED notes.
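A minimal sketch using the `transformers` pipeline (the label names are whatever this checkpoint defines):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="vsocrates/incar-status-any")
note = "Patient is a formerly incarcerated individual who arrived in the ED with stomach pain."
print(clf(note))
```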
## Bias, Risks, and Limitations
This should not be used directly without supervision from a physician as predicting incarceration status incorrectly can have significant negative social and clinical impacts.
## Training Details
### Training Data
This model was trained on custom annotated data labeled for incarceration status from Yale-New Haven Health Hospital System ED Notes.
### Training Procedure
## Evaluation
TODO
### Testing Data, Factors & Metrics
### Results
TODO
## Citation [optional]
Coming soon!
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Authors [optional]
Vimig Socrates
## Model Card Contact
Vimig Socrates: [[email protected]](mailto:[email protected])
|
Demo0203/gyx
|
Demo0203
| 2024-03-06T06:39:57Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-03-06T06:35:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.17 +/- 14.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename inside the repo is an assumption; check the repo's Files tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed, not confirmed.
checkpoint = load_from_hub(repo_id="Demo0203/gyx", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GregoRio123/nsy
|
GregoRio123
| 2024-03-06T06:39:07Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T05:42:51Z |
---
license: creativeml-openrail-m
---
|
therealvul/StyleTTS2
|
therealvul
| 2024-03-06T06:36:52Z | 0 | 1 | null |
[
"en",
"dataset:therealvul/StyleTTS2MLP",
"region:us"
] | null | 2023-11-03T09:37:12Z |
---
datasets:
- therealvul/StyleTTS2MLP
language:
- en
---
This repository contains StyleTTS2 models trained on Pony Preservation Project data.
|