Update README.md

README.md

Not only the mean or quantiles, you can estimate anything about the predictive distribution.
The base version is pre-trained on **1 trillion** time points with **128M** parameters. For more information, please refer to this [paper](https://arxiv.org/pdf/2502.00816).

**Sundial** can be viewed as an **ARMA** model (Auto-Regression and Moving-Average): the Transformer learns auto-regressive token representations, and, conditioned on them, TimeFlow transforms random noise into non-deterministic predictions.
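
Put loosely in symbols (our notation, not the paper's): the auto-regressive part computes representations $h_t = \mathrm{Transformer}(x_{\leq t})$, while the moving-average-like part draws $\hat{x}_{t+1} = \mathrm{TimeFlow}(\varepsilon \mid h_t)$ from fresh noise $\varepsilon \sim \mathcal{N}(0, I)$, so re-sampling $\varepsilon$ yields multiple plausible futures from the same context.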

**Overall Architecture**: The input time series is divided into patch tokens, which are embedded from the original continuous values. The patch embeddings are fed into a decoder-only Transformer, a stable and accelerated variant that learns token representations. The model is optimized with our TimeFlow Loss, a parameterized loss function that models the per-token probability distribution conditioned on the learned representations and generates multiple plausible predictions under the flow-matching framework.
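
Below is a minimal, self-contained sketch of these two ingredients, patch tokenization and conditional flow-matching sampling. The patch length, toy velocity network, and all shapes are illustrative assumptions, not the released implementation.

```
# Illustrative only: toy patchification and flow-matching sampling,
# not the authors' code. All sizes and names here are assumptions.
import torch
import torch.nn as nn

PATCH_LEN = 16

def patchify(series, patch_len=PATCH_LEN):
    # (batch, length) -> (batch, num_patches, patch_len) tokens
    b, l = series.shape
    return series[:, : l - l % patch_len].reshape(b, -1, patch_len)

class ToyVelocityNet(nn.Module):
    """Stand-in for the per-token flow network: predicts the velocity that
    transports noise toward a plausible next patch, given a condition h."""
    def __init__(self, d_model=64, patch_len=PATCH_LEN):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(patch_len + d_model + 1, 128), nn.GELU(),
            nn.Linear(128, patch_len),
        )

    def forward(self, x, t, h):
        return self.net(torch.cat([x, t, h], dim=-1))

@torch.no_grad()
def sample_next_patch(v_net, h, num_samples=20, steps=16):
    # Euler-integrate dx/dt = v(x, t, h) from t=0 (pure noise) to t=1.
    # A different starting noise gives a different plausible prediction.
    h = h.expand(num_samples, -1)
    x = torch.randn(num_samples, PATCH_LEN)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((num_samples, 1), i * dt)
        x = x + dt * v_net(x, t, h)
    return x

tokens = patchify(torch.randn(1, 2880))        # 2880-point context -> (1, 180, 16)
h_last = torch.randn(1, 64)                    # would come from the Transformer
samples = sample_next_patch(ToyVelocityNet(), h_last)  # (20, 16) candidate patches
```

At training time, flow matching would fit the velocity network by regressing the straight-line velocity between noise and data, which is what makes inference a few cheap Euler steps per token.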
## Quickstart
```
pip install transformers==4.40.1 # Use this version and Python 3.10 for stable compatibility
```
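
With the package installed, a forecast can be produced along the following lines. This is a sketch: the checkpoint id `thuml/sundial-base-128m` and the `num_samples` generation argument follow the authors' Hugging Face release, but check the model card for the exact signature.

```
import torch
from transformers import AutoModelForCausalLM

# trust_remote_code pulls in the Sundial architecture shipped with the checkpoint
model = AutoModelForCausalLM.from_pretrained(
    'thuml/sundial-base-128m', trust_remote_code=True
)

seqs = torch.randn(1, 2880)  # (batch, lookback_length) input series

# Draw 20 plausible trajectories for the next 96 points; keep the raw
# samples to estimate the mean, quantiles, or any other statistic.
forecast = model.generate(seqs, max_new_tokens=96, num_samples=20)
```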