Luis Oala committed on
Update README.md
README.md CHANGED
We maintain a collaborative virtual lab log at [this address](http://deplo-mlflo
### Review our experiments
Experiments are listed in the left column. You can select individual runs or compare metrics and parameters across different runs. For runs where we tracked images of intermediate processing steps and of the gradients at those steps, the images can be found at the bottom of the run page in the *results* folder for each epoch. For a better overview, the following table maps the experiment names used in the paper to the experiment names in the virtual lab log:
| Name of experiment in paper | Name of experiment in virtual lab log |
| :-------------: | :-----:|
| 5.2 Modular hardware-drift forensics | 2 Modular hardware-drift forensics |
| 5.3 Image processing customization | 3 Image processing customization (Microscopy), 3 Image processing customization (Drone) |
Note that the virtual lab log includes many additional experiments.
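Besides browsing in the UI, runs can also be listed programmatically with the standard mlflow tracking client. This is only a sketch under the assumption that the `MLFLOW_TRACKING_URI` environment variable points at the virtual lab log; the experiment name in the usage comment is taken from the mapping table above:

```python
def list_runs(experiment_name: str):
    """List all runs of one experiment via the mlflow tracking client."""
    from mlflow.tracking import MlflowClient  # deferred import: needs `mlflow`
    client = MlflowClient()  # reads MLFLOW_TRACKING_URI from the environment
    experiment = client.get_experiment_by_name(experiment_name)
    if experiment is None:
        raise ValueError(f"no experiment named {experiment_name!r}")
    return client.search_runs([experiment.experiment_id])


# Example, using a name from the table above (requires a live tracking server):
# runs = list_runs("2 Modular hardware-drift forensics")
```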
### Use our trained models
When you select a run for which a model was saved, you can find the model files, the state dictionary, and loading instructions at the bottom of the run page under *models*. In the menu bar at the top of the virtual lab log you can also access models via the *Model Registry*. Our code integrates with *mlflow*'s autologging and loading support for PyTorch, so when using our code you can simply pass the *model uri* as an argument and the model will be fetched from the model registry automatically.
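As a sketch of what loading by URI looks like with mlflow's PyTorch flavor (the model name and version below are hypothetical placeholders, not registered models from this repository):

```python
def load_registered_model(uri: str):
    """Fetch a PyTorch model from the mlflow Model Registry by URI.

    Needs the `mlflow` package installed and a reachable tracking
    server (the virtual lab log) with the model registered.
    """
    import mlflow.pytorch  # deferred so the sketch parses without mlflow
    return mlflow.pytorch.load_model(uri)


# Hypothetical registry coordinates -- substitute values from the lab log.
MODEL_NAME = "my-model"
MODEL_VERSION = 1
model_uri = f"models:/{MODEL_NAME}/{MODEL_VERSION}"
# model = load_registered_model(model_uri)  # requires a live tracking server
```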