# LSTM Autoencoder for Time Series Anomaly Detection

[Hugging Face Space](https://huggingface.co/spaces/rajatsingh0702/LSTMAE)
[Open in Colab](https://colab.research.google.com/drive/1h62dcS5nWos4wczenkG8iDKTJiHRZIqk)
[License: MIT](https://opensource.org/licenses/MIT) <!-- Choose your license -->

This repository contains the implementation of an LSTM (Long Short-Term Memory) Autoencoder for detecting anomalies in time series data. You can either use the pre-trained model provided here or train a new model on your own CSV dataset.

An interactive demo is available on Hugging Face Spaces, and a Google Colab notebook is provided for experimentation.

## Key Features

* **LSTM Autoencoder:** Built using PyTorch.
* **Two Modes:**
  1. **Use Pre-trained Model:** Quickly analyze time series data using the included model.
  2. **Train on Custom Data:** Upload your own CSV file to train a new LSTM Autoencoder tailored to your specific data.
* **Comprehensive Output:** Generates insightful plots and artifacts:
  * Andrews Curves Plot
  * Training Loss Curve
  * Anomaly Score Distribution
  * Evaluation Curve (e.g., ROC Curve, Precision-Recall Curve, or your custom "ANDRE" curve - *please clarify if "ANDRE" is a custom metric*)
* **Downloadable Results:** Packages the trained model, data scalers, and all generated plots into a convenient ZIP file for download.
* **Interactive Demo:** Hugging Face Space for easy interaction without local setup.
* **Colab Notebook:** Experiment with the code, training, and evaluation in a Google Colab environment.
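As a rough illustration of the "Downloadable Results" feature, the artifacts could be bundled with Python's standard `zipfile` module. `package_results` and the filenames below are hypothetical examples, not the repo's actual API:

```python
import zipfile
from pathlib import Path

def package_results(artifact_paths, zip_path="results.zip"):
    """Bundle artifacts (model weights, scalers, plots) into one ZIP for download."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in artifact_paths:
            # arcname keeps only the file name, so the ZIP has a flat layout.
            zf.write(p, arcname=Path(p).name)
    return zip_path
```

A caller would pass something like `["model.pt", "scaler.pkl", "loss_curve.png"]` and serve the returned ZIP path to the user.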

## How it Works

An LSTM Autoencoder is trained on 'normal' time series data.

1. The **Encoder** (an LSTM network) learns to compress the input time series into a lower-dimensional latent representation.
2. The **Decoder** (another LSTM network) learns to reconstruct the original time series from this latent representation.
3. During inference, the model tries to reconstruct new, unseen time series sequences.
4. If a sequence is similar to the normal data seen during training, the reconstruction error (the difference between the input and the reconstructed output) will be low.
5. If a sequence contains anomalies (patterns not seen during training), the model struggles to reconstruct it accurately, resulting in a high reconstruction error.
6. By setting a threshold on the reconstruction error, we can classify sequences as normal or anomalous.
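The encoder/decoder pair described above can be sketched in PyTorch. The class name `LSTMAutoencoder` and the layer sizes are illustrative assumptions, not the repository's exact architecture:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    def __init__(self, n_features=1, latent_dim=16):
        super().__init__()
        # Encoder: compress the sequence into the last hidden state.
        self.encoder = nn.LSTM(n_features, latent_dim, batch_first=True)
        # Decoder: unroll the latent vector back into a sequence.
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.output_layer = nn.Linear(latent_dim, n_features)

    def forward(self, x):                     # x: (batch, seq_len, n_features)
        _, (h, _) = self.encoder(x)           # h: (1, batch, latent_dim)
        seq_len = x.size(1)
        # Repeat the latent vector for every time step, then decode.
        z = h[-1].unsqueeze(1).repeat(1, seq_len, 1)
        decoded, _ = self.decoder(z)
        return self.output_layer(decoded)     # reconstruction, same shape as x

model = LSTMAutoencoder()
batch = torch.randn(4, 30, 1)                 # 4 sequences of 30 time steps
recon = model(batch)
# Per-sequence reconstruction error (MSE); a high value suggests an anomaly.
errors = ((batch - recon) ** 2).mean(dim=(1, 2))
```

Thresholding `errors` (step 6) then separates normal from anomalous sequences.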

## Installation (Local Setup)

1. **Clone the repository:**
   ```bash
   git clone https://github.com/Rajatsingh24/LSTM-based-Autoencoder.git
   cd LSTM-based-Autoencoder
   ```
2. **Create a virtual environment (recommended):**
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```
3. **Install dependencies:**
   *Make sure you have a `requirements.txt` file in your repository.*
   ```bash
   pip install -r requirements.txt
   ```

## Usage

You can interact with the model primarily through the Hugging Face Space or the Colab Notebook.
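For local experimentation, the data-windowing and thresholding steps might look like the following sketch; `make_windows` and `anomaly_flags` are illustrative names, not functions from this repository:

```python
import numpy as np

def make_windows(series, window_size):
    """Slice a 1-D series into overlapping windows (stride 1)."""
    return np.stack([series[i:i + window_size]
                     for i in range(len(series) - window_size + 1)])

def anomaly_flags(errors, quantile=0.95):
    """Flag windows whose reconstruction error exceeds a quantile threshold."""
    threshold = np.quantile(errors, quantile)
    return errors > threshold, threshold

series = np.sin(np.linspace(0, 20, 200))      # stand-in for a CSV column
windows = make_windows(series, window_size=30)
```

Each window would be fed through the trained autoencoder, and its reconstruction error passed to `anomaly_flags` to decide normal vs. anomalous.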

---
title: LSTMAE