Luis Oala committed on
Commit 88372c2 · unverified · 1 Parent(s): c263f7b

Update README.md

Files changed (1):
  1. README.md +7 -0

README.md CHANGED
@@ -135,6 +135,13 @@ We maintain a collaborative virtual lab log at [this address](http://deplo-mlflo
 
 ### Review our experiments
 Experiments are listed in the left column. You can select individual runs or compare metrics and parameters across different runs. For runs where we tracked images of intermediate processing steps and of the gradients at these steps, you can find them at the bottom of the run page in the *results* folder for each epoch.
+
+| Name of experiment in paper | Name of experiment in virtual lab log |
+| :-------------: | :-----: |
+| 5.1 Controlled synthesis of hardware-drift test cases | 1 Controlled synthesis of hardware-drift test cases (Train), 1 Controlled synthesis of hardware-drift test cases (Test) |
+| 5.2 Modular hardware-drift forensics | 2 Modular hardware-drift forensics |
+| 5.3 Image processing customization | 3 Image processing customization (Microscopy), 3 Image processing customization (Drone) |
+
 ### Use our trained models
 When you select a run for which a model was saved, you can find the model files, the state dictionary, and loading instructions at the bottom of the run page under *models*. In the menu bar at the top of the virtual lab log you can also access models via the *Model Registry*. Our code integrates with *mlflow* autologging and autoloading for PyTorch, so when using our code you can simply pass the *model uri* as an argument and the model will be fetched from the model registry automatically.
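Loading a model outside of our code works the same way through the standard *mlflow* client API. The sketch below is a minimal, hypothetical example: the tracking URL, registered model name, version, and input shape are placeholders you would replace with the values shown on the run page; `mlflow.set_tracking_uri` and `mlflow.pytorch.load_model` are the actual mlflow calls involved.

```python
# Minimal sketch (all names/URLs are placeholders): fetch a registered
# model from an MLflow tracking server and run it on a dummy input.
import mlflow
import mlflow.pytorch
import torch

# Point the client at the virtual lab log (placeholder address).
mlflow.set_tracking_uri("http://<tracking-server>:5000")

# A "models:/<name>/<version>" URI resolves against the Model Registry;
# a "runs:/<run_id>/<artifact_path>" URI addresses per-run artifacts.
model = mlflow.pytorch.load_model("models:/<registered-model-name>/1")
model.eval()

with torch.no_grad():
    # Input shape depends on the experiment; 1x3x224x224 is illustrative.
    output = model(torch.zeros(1, 3, 224, 224))
```

The same *model uri* string is what our scripts accept as an argument, so either path leads to the identical registry lookup.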