---
license: mit
tags:
- pytorch
- neural-network
- chaos-theory
- logistic-map
---

# Logistic Map Approximator (Neural Network)

This model approximates the **logistic map equation**:

> xₙ₊₁ = r × xₙ × (1 − xₙ)

It is trained using a simple feedforward neural network to learn chaotic dynamics across different values of `r` ∈ [2.5, 4.0].
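
The map itself is a one-line recurrence, which makes training data cheap to generate. A minimal sketch (plain Python, no framework required) of iterating it:

```python
def logistic_map(x, r):
    """One step of the logistic map: x_next = r * x * (1 - x)."""
    return r * x * (1 - x)

def trajectory(x0, r, steps):
    """Iterate the map from x0, returning a sequence of length steps + 1."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_map(xs[-1], r))
    return xs

# At r = 4.0 the map is fully chaotic for most starting points;
# x0 = 0.5 happens to hit the fixed point 0 after two steps.
print(trajectory(0.5, 4.0, 3))  # [0.5, 1.0, 0.0, 0.0]
```

Pairs `(x, r) → x_next` sampled from such trajectories are the natural training set for the network described below.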

## Model Details

- **Framework:** PyTorch
- **Input:**
  - `x` ∈ [0, 1]
  - `r` ∈ [2.5, 4.0]
- **Output:** `x_next` (approximation of the next value in the sequence)
- **Loss Function:** Mean Squared Error (MSE)
- **Architecture:** 2 hidden layers (ReLU), trained for 100 epochs
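
The details above fix the interface but not the layer widths. A hedged PyTorch sketch, assuming a hidden width of 64 (only the two ReLU hidden layers are specified here):

```python
import torch
import torch.nn as nn

class LogisticMapApproximator(nn.Module):
    """Feedforward net mapping (x, r) -> x_next.

    The hidden width (64) is an assumption; only the depth and
    activation are documented above.
    """
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden),    # input: concatenated (x, r)
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),    # output: predicted x_next
        )

    def forward(self, xr):
        return self.net(xr)

model = LogisticMapApproximator()
loss_fn = nn.MSELoss()
pred = model(torch.tensor([[0.5, 3.7]]))
print(pred.shape)  # torch.Size([1, 1])
```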

## Performance

The model closely approximates `x_next` for a wide range of `r` values, including the chaotic regime.

Visualization:

![Prediction vs Ground Truth](example_plot.png)

## Files

- `logistic_map_approximator.pth`: Trained PyTorch model weights
- `mandelbrot.py`: Full training and evaluation code
- `README.md`: You're reading it
- `example_plot.png`: Comparison of true vs predicted outputs
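
A hedged inference sketch for the checkpoint. The architecture here is an assumption and must mirror whatever `mandelbrot.py` actually defines (the hidden width of 64 is a guess):

```python
import torch
import torch.nn as nn
from pathlib import Path

# Assumed architecture: must match the model whose weights were saved
# to logistic_map_approximator.pth (hidden width 64 is a guess).
net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

ckpt = Path("logistic_map_approximator.pth")
if ckpt.exists():  # load trained weights when the checkpoint is present
    net.load_state_dict(torch.load(ckpt, map_location="cpu"))
net.eval()

with torch.no_grad():
    x, r = 0.5, 3.7
    x_next = net(torch.tensor([[x, r]])).item()
    print(f"predicted x_next for (x={x}, r={r}): {x_next:.4f}")
```

For a sanity check, compare the prediction against the exact value `r * x * (1 - x)`.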

## Applications

- Chaos theory visualizations
- Educational tools for non-linear dynamics
- Function approximation benchmarking

## License

MIT License