Redgerd committed · Commit d3d079b · verified · 1 Parent(s): ef0220e

Create ReadMe.md

Files changed (1):
  1. ReadMe.md +91 -0
ReadMe.md ADDED
@@ -0,0 +1,91 @@
## Deepfake Detection Model

This repository contains a deepfake detection model that combines a pre-trained Xception network with an LSTM layer. The model classifies videos as either "Real" or "Fake" by analyzing sequences of facial frames extracted from the video.

### Model Architecture

The model architecture consists of the following components (a minimal Keras sketch follows the list):

1. **Input Layer**: Takes a sequence of `TIME_STEPS` frames, each resized to `299x299` pixels with 3 color channels. The input shape is `(batch_size, TIME_STEPS, HEIGHT, WIDTH, 3)`.

2. **TimeDistributed Xception**: A pre-trained Xception network (trained on ImageNet) is applied to each frame independently via a `TimeDistributed` wrapper. With `include_top=False` and `pooling='avg'`, the Xception network serves as a per-frame feature extractor, producing one feature vector per frame.

3. **LSTM Layer**: The sequence of per-frame feature vectors is fed into an LSTM (Long Short-Term Memory) layer with `256` hidden units. The LSTM learns temporal dependencies between frames, which is crucial for deepfake detection.

4. **Dropout Layer**: A `Dropout` layer with a rate of `0.5` is applied after the LSTM layer to reduce overfitting.

5. **Output Layer**: A `Dense` layer with `2` units and a `softmax` activation outputs the probabilities for the two classes: "Real" and "Fake".

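The `build_model` function referenced in the steps below is not reproduced in this README. The snippet that follows is a minimal sketch reconstructing it from the component list above, using standard Keras layers and the defaults from the Parameters section; the actual definition may differ in details such as layer names or whether the Xception backbone is frozen.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

TIME_STEPS, HEIGHT, WIDTH = 30, 299, 299  # defaults from the Parameters section

def build_model(lstm_hidden_size=256, dropout_rate=0.5, num_classes=2):
    # Per-frame feature extractor: Xception without its classification head;
    # global average pooling yields one feature vector per frame.
    backbone = Xception(weights='imagenet', include_top=False, pooling='avg',
                        input_shape=(HEIGHT, WIDTH, 3))

    inputs = layers.Input(shape=(TIME_STEPS, HEIGHT, WIDTH, 3))
    x = layers.TimeDistributed(backbone)(inputs)  # (batch, TIME_STEPS, features)
    x = layers.LSTM(lstm_hidden_size)(x)          # temporal modelling across frames
    x = layers.Dropout(dropout_rate)(x)
    outputs = layers.Dense(num_classes, activation='softmax')(x)
    return models.Model(inputs, outputs)
```

Note that `model.load_weights` in the Model Loading step only succeeds if the rebuilt architecture matches the saved checkpoint exactly, so treat this sketch as a guide rather than a drop-in replacement.
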
### How to Use

#### 1\. Setup

Clone the repository and install the required libraries:

```bash
pip install tensorflow opencv-python numpy mtcnn Pillow
```

#### 2\. Model Loading

The model weights are loaded from `COMBINED_best_Phase1.keras`. Ensure this file is accessible at the specified `model_path`.

```python
model_path = '/content/drive/MyDrive/Dataset DDM/FINAL models/COMBINED_best_Phase1.keras'
model = build_model()  # Architecture defined in the `build_model` function
model.load_weights(model_path)
```

#### 3\. Face Extraction and Preprocessing

The `extract_faces_from_video` function processes a given video file (a reconstruction sketch follows the stub below):

* It uses MTCNN (Multi-task Cascaded Convolutional Networks) for robust face detection in each frame.
* It samples `TIME_STEPS` frames from the video.
* For each sampled frame, it detects the primary face, crops it, and resizes it to `299x299` pixels.
* The extracted face images are then preprocessed with `preprocess_input` from `tensorflow.keras.applications.xception`, which scales pixel values to the range expected by the Xception model.
* If no face is detected in a frame, a black image of the same dimensions is used as a placeholder.
* The function ensures that exactly `TIME_STEPS` frames are returned, padding with the last available frame or black images if necessary.

```python
from mtcnn import MTCNN
import cv2
import numpy as np
from PIL import Image
from tensorflow.keras.applications.xception import preprocess_input

def extract_faces_from_video(video_path, num_frames=30):
    # ... (function implementation as provided in prediction.ipynb)
    pass

video_path = '/content/drive/MyDrive/Dataset DDM/FF++/manipulated_sequences/FaceShifter/raw/videos/724_725.mp4'
video_array = extract_faces_from_video(video_path, num_frames=TIME_STEPS)
```

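The body of `extract_faces_from_video` lives in `prediction.ipynb` and is only stubbed above. As a rough guide, a minimal sketch following the steps listed in this section might look like the code below; the frame-sampling strategy, the rule for picking the primary face, and the exact return shape are assumptions and may differ from the notebook.

```python
import cv2
import numpy as np
from mtcnn import MTCNN
from tensorflow.keras.applications.xception import preprocess_input

HEIGHT, WIDTH, TIME_STEPS = 299, 299, 30
detector = MTCNN()

def extract_faces_from_video(video_path, num_frames=TIME_STEPS):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Sample num_frames indices spread evenly across the video.
    indices = np.linspace(0, max(total - 1, 0), num_frames, dtype=int)

    faces = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        face_img = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # black placeholder
        if ok:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            detections = detector.detect_faces(rgb)
            if detections:
                # Use the highest-confidence detection as the primary face.
                x, y, w, h = max(detections, key=lambda d: d['confidence'])['box']
                x, y = max(x, 0), max(y, 0)
                crop = rgb[y:y + h, x:x + w]
                if crop.size:
                    face_img = cv2.resize(crop, (WIDTH, HEIGHT))
        faces.append(face_img)
    cap.release()

    # Pad with the last available frame (or black) if the video was too short.
    while len(faces) < num_frames:
        faces.append(faces[-1] if faces else np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8))

    frames = preprocess_input(np.array(faces, dtype=np.float32))  # scale to [-1, 1]
    return np.expand_dims(frames, axis=0)  # (1, num_frames, HEIGHT, WIDTH, 3)
```

The leading batch dimension is included here so the result can be passed straight to `model.predict` in the Prediction step.
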
#### 4\. Prediction

Once the `video_array` (preprocessed frames) is ready, you can make a prediction using the loaded model:

```python
predictions = model.predict(video_array)
predicted_class = np.argmax(predictions, axis=1)[0]
probabilities = predictions[0]

class_names = ['Real', 'Fake']
print(f"Predicted Class: {class_names[predicted_class]}")
print(f"Class Probabilities: Real: {probabilities[0]:.4f}, Fake: {probabilities[1]:.4f}")
```

### Parameters

The main hyperparameters are listed below; a short end-to-end recap using these defaults follows the list.

* `TIME_STEPS`: Number of frames to extract from each video (default: `30`).
* `HEIGHT`, `WIDTH`: Dimensions to which each extracted face image is resized (default: `299, 299`).
* `lstm_hidden_size`: Number of hidden units in the LSTM layer (default: `256`).
* `dropout_rate`: Dropout rate applied after the LSTM layer (default: `0.5`).
* `num_classes`: Number of output classes (default: `2` for "Real" and "Fake").

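Putting the pieces together, an end-to-end call with these defaults might look like the snippet below. The paths are placeholders, and `build_model` and `extract_faces_from_video` refer to the sketches above rather than to confirmed APIs from the notebook.

```python
TIME_STEPS = 30  # frames sampled per video

# Rebuild the architecture with the listed defaults and load the checkpoint.
model = build_model(lstm_hidden_size=256, dropout_rate=0.5, num_classes=2)
model.load_weights('COMBINED_best_Phase1.keras')  # adjust to your checkpoint location

# Extract faces from a video and classify it.
video_array = extract_faces_from_video('input_video.mp4', num_frames=TIME_STEPS)
probabilities = model.predict(video_array)[0]  # [P(Real), P(Fake)]
```
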
### Development Environment

The code is written in Python and uses `tensorflow` (Keras API), `opencv-python`, `numpy`, `mtcnn`, and `Pillow`. It is designed to run in an environment with these libraries installed; the file paths suggest it was developed against Google Drive, most likely in a Colab environment.