Commit: d90120f
Parent(s): a1e153e

Fix formatting warnings

Files changed:
- docs/api-advanced.md (+2, -1)
- docs/api.md (+1, -1)
- docs/examples.md (+21, -7)
- docs/generate_papers.py (+1, -0)
- docs/options.md (+27, -8)
--- a/docs/api-advanced.md
+++ b/docs/api-advanced.md
@@ -1,6 +1,7 @@
 # Internal Reference
 
 ## Julia Interface
+
 ::: pysr.julia_helpers
     options:
         members:
@@ -34,4 +35,4 @@
     options:
         members:
            - sympy2torch
-        heading_level: 3
+        heading_level: 3
--- a/docs/api.md
+++ b/docs/api.md
@@ -13,4 +13,4 @@
            - latex_table
            - refresh
         show_root_members_full_path: true
-        heading_level: 2
+        heading_level: 2
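Both API pages configure mkdocstrings `:::` blocks, and the two hunks above are whitespace-only: the visible text of the `heading_level` lines is unchanged, so the fix is presumably to their indentation within the options YAML. A well-formed block places `heading_level` directly under `options:`, at the same depth as `members:`; a sketch for context (the `pysr.module_name` path is a placeholder, not taken from this commit):

```markdown
::: pysr.module_name
    options:
        members:
            - sympy2torch
        heading_level: 3
```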
--- a/docs/examples.md
+++ b/docs/examples.md
@@ -1,16 +1,15 @@
 # Toy Examples with Code
 
-
+## Preamble
 
 ```python
 import numpy as np
 from pysr import *
 ```
 
-
 ## 1. Simple search
 
-Here's a simple example where we
+Here's a simple example where we
 find the expression `2 cos(x3) + x0^2 - 2`.
 
 ```python
@@ -40,6 +39,7 @@ print(model)
 
 Here, we do the same thing, but with multiple expressions at once,
 each requiring a different feature.
+
 ```python
 X = 2 * np.random.randn(100, 5)
 y = 1 / X[:, [0, 1, 2]]
@@ -60,22 +60,26 @@ function:
 model.set_params(extra_sympy_mappings={"inv": lambda x: 1/x})
 model.sympy()
 ```
+
 If you look at the lists of expressions before and after, you will
 see that the sympy format now has replaced `inv` with `1/`.
 We can again look at the equation chosen:
+
 ```python
 print(model)
 ```
 
 For now, let's consider the expressions for output 0.
 We can see the LaTeX version of this with:
+
 ```python
 model.latex()[0]
 ```
-or output 1 with `model.latex()[1]`.
 
+or output 1 with `model.latex()[1]`.
 
 Let's plot the prediction against the truth:
+
 ```python
 from matplotlib import pyplot as plt
 plt.scatter(y[:, 0], model(X)[:, 0])
@@ -83,9 +87,10 @@ plt.xlabel('Truth')
 plt.ylabel('Prediction')
 plt.show()
 ```
+
 Which gives us:
 
-
+
 
 ## 5. Feature selection
 
@@ -104,12 +109,14 @@
 
 Here is an example. Let's say we have 30 input features and 300 data points, but only 2
 of those features are actually used:
+
 ```python
 X = np.random.randn(300, 30)
 y = X[:, 3]**2 - X[:, 19]**2 + 1.5
 ```
 
 Let's create a model with the feature selection argument set up:
+
 ```python
 model = PySRRegressor(
     binary_operators=["+", "-", "*", "/"],
@@ -117,15 +124,19 @@ model = PySRRegressor(
     select_k_features=5,
 )
 ```
+
 Now let's fit this:
+
 ```python
 model.fit(X, y)
 ```
 
 Before the Julia backend is launched, you can see the string:
-
+
+```text
 Using features ['x3', 'x5', 'x7', 'x19', 'x21']
 ```
+
 which indicates that the feature selection (powered by a gradient-boosting tree)
 has successfully selected the relevant two features.
 
@@ -152,6 +163,7 @@ set the parameter `denoise=True`. This will fit a Gaussian process (containing a
 to the input dataset, and predict new targets (which are assumed to be denoised) from that Gaussian process.
 
 For example:
+
 ```python
 X = np.random.randn(100, 5)
 noise = np.random.randn(100) * 0.1
@@ -159,6 +171,7 @@ y = np.exp(X[:, 0]) + X[:, 1] + X[:, 2] + noise
 ```
 
 Let's create and fit a model with the denoising argument set up:
+
 ```python
 model = PySRRegressor(
     binary_operators=["+", "-", "*", "/"],
@@ -168,9 +181,10 @@ model = PySRRegressor(
 model.fit(X, y)
 print(model)
 ```
+
 If all goes well, you should find that it predicts the correct input equation, without the noise term!
 
 ## 7. Additional features
 
 For the many other features available in PySR, please
-read the [Options section](options.md).
+read the [Options section](options.md).
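The feature-selection example above is split across several hunks; a minimal assembly of the snippets shown in this diff (the import is adapted, and any constructor arguments hidden between hunks are omitted) reads:

```python
import numpy as np
from pysr import PySRRegressor

# 30 features, 300 rows, but only x3 and x19 enter the target.
X = np.random.randn(300, 30)
y = X[:, 3] ** 2 - X[:, 19] ** 2 + 1.5

model = PySRRegressor(
    binary_operators=["+", "-", "*", "/"],
    select_k_features=5,  # pre-select 5 candidate features before the search
)
model.fit(X, y)
print(model)
```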
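Similarly, the denoising example can be assembled into one script. The `unary_operators` line is an assumption (the diff does not show the full constructor), added so the search can represent the `exp` term:

```python
import numpy as np
from pysr import PySRRegressor

X = np.random.randn(100, 5)
noise = np.random.randn(100) * 0.1
y = np.exp(X[:, 0]) + X[:, 1] + X[:, 2] + noise

model = PySRRegressor(
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["exp"],  # assumption: not shown in the hunks above
    denoise=True,             # fit a Gaussian process, regress on its predictions
)
model.fit(X, y)
print(model)
```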
--- a/docs/generate_papers.py
+++ b/docs/generate_papers.py
@@ -1,3 +1,4 @@
+"""This script generates the papers.md file from the papers.yml file."""
 import yaml
 from pathlib import Path
 
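The new docstring summarizes what the script does. A minimal sketch of such a yaml-to-markdown generator, under the assumption of a hypothetical schema (the `papers`/`title`/`link` field names are not read from the repository):

```python
"""This script generates the papers.md file from the papers.yml file."""
import yaml
from pathlib import Path

# Hypothetical schema: papers.yml holds a `papers` list of {title, link} entries.
entries = yaml.safe_load(Path("docs/papers.yml").read_text())["papers"]

lines = ["# Research Showcase", ""]
for paper in entries:
    lines.append(f"- [{paper['title']}]({paper['link']})")

Path("docs/papers.md").write_text("\n".join(lines) + "\n")
```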
--- a/docs/options.md
+++ b/docs/options.md
@@ -43,8 +43,9 @@ the equation selection with the arrow shown in the `pick` column.
 
 ## Operators
 
-A list of operators can be found on the operators page.
+A list of operators can be found on the [operators page](operators.md).
 One can define custom operators in Julia by passing a string:
+
 ```python
 PySRRegressor(niterations=100,
     binary_operators=["mult", "plus", "special(x, y) = x^2 + y"],
@@ -107,6 +108,7 @@ on each core.
 Here, we assign weights to each row of data
 using inverse uncertainty squared. We also use 10 processes for the search
 instead of the default.
+
 ```python
 sigma = ...
 weights = 1/sigma**2
@@ -126,8 +128,8 @@ One can warm up the maxsize from a small number to encourage
 PySR to start simple, by using the `warmupMaxsize` argument.
 This specifies that maxsize increases every `warmupMaxsize`.
 
-
 ## Batching
+
 One can turn on mini-batching, with the `batching` flag,
 and control the batch size with `batch_size`. This will make
 evolution faster for large datasets. Equations are still evaluated
@@ -151,11 +153,11 @@ There is a "maxsize" parameter to PySR, but there is also an operator-level
 constraints={'pow': (-1, 1), 'mult': (3, 3), 'cos': 5}
 ```
 
-What this says is that: a power law x^y can have an expression of arbitrary (-1) complexity in the x, but only complexity 1 (e.g., a constant or variable) in the y. So (x_0 + 3)^5.5 is allowed, but 5.5^(x_0 + 3) is not.
+What this says is that: a power law $x^y$ can have an expression of arbitrary (-1) complexity in the x, but only complexity 1 (e.g., a constant or variable) in the y. So $(x_0 + 3)^{5.5}$ is allowed, but $5.5^{x_0 + 3}$ is not.
 I find this helps a lot for getting more interpretable equations.
 The other terms say that each multiplication can only have sub-expressions
-of up to complexity 3 (e.g., 5.0 + x_2) in each side, and cosine can only operate on
-expressions of complexity 5 (e.g., 5.0 + x_2 exp(x_3)).
+of up to complexity 3 (e.g., $5.0 + x_2$) in each side, and cosine can only operate on
+expressions of complexity 5 (e.g., $5.0 + x_2 exp(x_3)$).
 
 ## Custom complexity
 
@@ -182,12 +184,12 @@ You can optionally pass a pandas dataframe to the callable function,
 if you called `.fit` on a pandas dataframe as well.
 
 There are also some helper functions for doing this quickly.
+
 - `model.latex()` will generate a TeX formatted output of your equation.
 - `model.sympy()` will return the SymPy representation.
 - `model.jax()` will return a callable JAX function combined with parameters (see below)
 - `model.pytorch()` will return a PyTorch model (see below).
 
-
 ## Exporting to numpy, pytorch, and jax
 
 By default, the dataframe of equations will contain columns
@@ -214,21 +216,25 @@ a PyTorch module which runs the equation, using PyTorch functions,
 over `X` (as a PyTorch tensor). This is differentiable, and the
 parameters of this PyTorch module correspond to the learned parameters
 in the equation, and are trainable.
+
 ```python
 torch_model = model.pytorch()
 torch_model(X)
 ```
+
 **Warning: If you are using custom operators, you must define `extra_torch_mappings` or `extra_jax_mappings` (both are `dict` of callables) to provide an equivalent definition of the functions.** (At any time you can set these parameters or any others with `model.set_params`.)
 
 For JAX, you can equivalently call `model.jax()`
 This will return a dictionary containing a `'callable'` (a JAX function),
 and `'parameters'` (a list of parameters in the equation).
 You can execute this function with:
+
 ```python
 jax_model = model.jax()
 jax_model['callable'](X, jax_model['parameters'])
 ```
-
+
+Since the parameter list is a jax array, this therefore lets you also
 train the parameters within JAX (and is differentiable).
 
 ## `loss`
@@ -243,29 +249,40 @@ page for SymbolicRegression.jl.
 Here are some additional examples:
 
 abs(x-y) loss
+
 ```python
 PySRRegressor(..., loss="f(x, y) = abs(x - y)^1.5")
 ```
+
 Note that the function name doesn't matter:
+
 ```python
 PySRRegressor(..., loss="loss(x, y) = abs(x * y)")
 ```
+
 With weights:
+
 ```python
 model = PySRRegressor(..., loss="myloss(x, y, w) = w * abs(x - y)")
 model.fit(..., weights=weights)
 ```
+
 Weights can be used in arbitrary ways:
+
 ```python
 model = PySRRegressor(..., weights=weights, loss="myloss(x, y, w) = abs(x - y)^2/w^2")
 model.fit(..., weights=weights)
 ```
+
 Built-in loss (faster) (see [losses](https://astroautomata.com/SymbolicRegression.jl/dev/losses/)).
 This one computes the L3 norm:
+
 ```python
 PySRRegressor(..., loss="LPDistLoss{3}()")
 ```
+
 Can also uses these losses for weighted (weighted-average):
+
 ```python
 model = PySRRegressor(..., weights=weights, loss="LPDistLoss{3}()")
 model.fit(..., weights=weights)
@@ -278,12 +295,14 @@ when you call `model.fit`, once before the search starts,
 and again after the search finishes. The filename will
 have the same base name as the input file, but with a `.pkl` extension.
 You can load the saved model state with:
+
 ```python
 model = PySRRegressor.from_file(pickle_filename)
 ```
+
 If you have a long-running job and would like to load the model
 before completion, you can also do this. In this case, the model
 loading will use the `csv` file to load the equations, since the
 `csv` file is continually updated during the search. Once
 the search completes, the model including its equations will
-be saved to the pickle file, overwriting the existing version.
+be saved to the pickle file, overwriting the existing version.
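The reworded constraints passage pairs naturally with a constructor call. A sketch using the exact `constraints` dict from the diff (the operator lists are illustrative):

```python
from pysr import PySRRegressor

model = PySRRegressor(
    binary_operators=["plus", "mult", "pow"],  # illustrative operator set
    unary_operators=["cos"],
    # From the diff: exponents limited to a constant or variable, each side of
    # a multiplication limited to complexity 3, cos arguments to complexity 5.
    constraints={"pow": (-1, 1), "mult": (3, 3), "cos": 5},
)
```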
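Since the exported PyTorch module's parameters are trainable, as the diff notes, one can fine-tune the equation's constants against data. A sketch under the assumption that `model` is a fitted `PySRRegressor` and `X_t`, `y_t` are torch tensors of matching shape:

```python
import torch

torch_model = model.pytorch()  # the selected equation as a differentiable module
optimizer = torch.optim.Adam(torch_model.parameters(), lr=1e-3)

for _ in range(200):
    optimizer.zero_grad()
    # Refine the equation's constants by minimizing squared error.
    loss = torch.nn.functional.mse_loss(torch_model(X_t), y_t)
    loss.backward()
    optimizer.step()
```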