Commit cb07cb2 (unverified) by Luis Oala · 1 parent: 423f073

Update README.md

Files changed (1): README.md (+64 −0)
foo@bar:~$ python -m pip install git+https://github.com/qubvel/segmentation_models.pytorch
```
### Recreate experiments
The central file for running the experiments from the paper with the **Lens2Logit** framework is `train.py`. It provides a rich set of arguments for experimenting with raw image data, different image processing models, and task models for regression or classification. Below we provide three example commands for the types of experiments reported in the [paper](https://openreview.net/forum?id=DRAywM1BhU).
#### Controlled synthesis of hardware-drift test cases
```console
foo@bar:~$ python train.py \
--experiment_name YOUR-EXPERIMENT-NAME \
--run_name YOUR-RUN-NAME \
--dataset Microscopy \
--lr 1e-5 \
--n_splits 5 \
--epochs 5 \
--classifier_pretrained \
--processing_mode static \
--augmentation weak \
--log_model True \
--iso 0.01 \
--freeze_processor \
--processor_uri "$processor_uri" \
--track_processing \
--track_every_epoch \
--track_predictions \
--track_processing_gradients \
--track_save_tensors
```
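The flag `--processor_uri "$processor_uri"` above expects a shell variable pointing at a previously trained, frozen processing model. How that URI is obtained depends on your tracking setup; the virtual lab log below appears to be an MLflow server, so an MLflow-style `runs:/` URI is one plausible form — this is an assumption, and the run ID and artifact path below are placeholders, not values from the paper:

```shell
# Hypothetical example: point processor_uri at a logged processing-model artifact.
# Replace <YOUR-RUN-ID> and the artifact path with values from your own runs.
export processor_uri="runs:/<YOUR-RUN-ID>/processing_model"
echo "$processor_uri"
```

With the variable set, the first command above can be run as-is.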
#### Modular hardware-drift forensics
```console
foo@bar:~$ python train.py \
--experiment_name YOUR-EXPERIMENT-NAME \
--run_name YOUR-RUN-NAME \
--dataset Microscopy \
--adv_training \
--lr 1e-5 \
--n_splits 5 \
--epochs 5 \
--classifier_pretrained \
--processing_mode parametrized \
--augmentation weak \
--log_model True \
--iso 0.01 \
--track_processing \
--track_every_epoch \
--track_predictions \
--track_processing_gradients \
--track_save_tensors
```
#### Image processing customization
```console
foo@bar:~$ python train.py \
--experiment_name YOUR-EXPERIMENT-NAME \
--run_name YOUR-RUN-NAME \
--dataset Microscopy \
--lr 1e-5 \
--n_splits 5 \
--epochs 5 \
--classifier_pretrained \
--processing_mode parametrized \
--augmentation weak \
--log_model True \
--iso 0.01 \
--track_processing \
--track_every_epoch \
--track_predictions \
--track_processing_gradients \
--track_save_tensors
```
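Each command above trains at a single ISO setting. To synthesize a whole family of hardware-drift test cases, the same call can be wrapped in a plain shell loop over camera parameters; the ISO values below are illustrative rather than prescribed by the paper, and the loop only prints the invocations (a dry run) instead of executing them:

```shell
# Illustrative dry run: print one train.py invocation per ISO level.
# The ISO values are examples only; substitute your own sweep.
for iso in 0.01 0.05 0.1; do
  echo "python train.py --dataset Microscopy --processing_mode static --iso $iso"
done
```

Dropping the `echo` (and restoring the remaining flags from the examples above) turns the dry run into an actual sweep.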
## Virtual lab log
We maintain a collaborative virtual lab log at [this address](http://deplo-mlflo-1ssxo94f973sj-890390d809901dbf.elb.eu-central-1.amazonaws.com/#/). There you can browse experiment runs, analyze results through SQL queries and download trained processing and task models.
<p align="center">