Commit 9e2fa2e · updated code
Parent: 08cc004
README.md
CHANGED
@@ -9,4 +9,84 @@ app_file: app.py
pinned: false
---

# Object Difference Highlighter

## Overview

The **Object Difference Highlighter** is a computer vision application that identifies and isolates new objects in a scene. Given two images of the same scene—one without an object and another with an added object (taken seconds apart)—the app intelligently detects what has changed while filtering out environmental noise like moving foliage, slight shadows, or minor lighting changes.

## Key Features

- **Intelligent Object Detection**: Identifies real, meaningful changes between two similar images
- **Noise Filtering**: Eliminates false positives from environmental variations (wind-moved objects, minor lighting changes)
- **Multiple Detection Algorithms**:
  - **SSIM (Structural Similarity Index Measure)**: Detects structural differences
  - **Background Subtraction**: Filters out the background to focus on foreground changes
  - **Optical Flow**: Detects motion-based changes between frames
- **Comprehensive Visualization**: Six different views to analyze and understand the changes
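The shared idea behind these methods can be sketched in a few lines of NumPy (a simplified illustration, not the app's actual code — `change_mask` and its threshold value are hypothetical): compute a per-pixel difference between the two shots and keep only pixels that changed by more than a threshold, which is what filters out small lighting drift.

```python
import numpy as np

def change_mask(scene, scene_with_object, threshold=30):
    """Toy change detector: absolute difference + binary threshold.

    Both inputs are uint8 grayscale arrays of the same shape; the result
    is a 0/255 mask of pixels that changed by more than `threshold`.
    """
    diff = np.abs(scene.astype(np.int16) - scene_with_object.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)

# A flat scene with a bright 2x2 "object" added in one corner,
# plus a 1-level global lighting drift that the threshold filters out
scene = np.full((4, 4), 100, dtype=np.uint8)
with_object = scene + 1          # minor lighting change everywhere
with_object[:2, :2] = 200        # the new object
mask = change_mask(scene, with_object)
```

The real algorithms differ in how they compute the difference (structural similarity, a background model, or motion vectors), but they all end in a thresholded binary mask like this one.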

## Visual Outputs

The application provides six different views arranged in three rows:

### Row 1: Base Comparison
- **Blended Image**: A 50/50 blend of both input images
- **Raw Difference Overlay**: Unprocessed pixel differences shown in magenta on the original image

### Row 2: Algorithm Results
- **Highlighted Differences**: Areas identified as changed by the selected algorithm
- **Black & White Mask**: Binary mask showing exactly which areas the algorithm detected as changed

### Row 3: Final Composition
- **Composite Image**: The scene from the first image with only the detected object from the second image
- **Final Difference Overlay**: Differences between the original scene and the composite image, shown in magenta
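Row 1 and Row 3 are simple pixel operations, illustrated here with a small NumPy sketch (hypothetical helper names; the app itself uses OpenCV calls such as `cv2.addWeighted`): the blend averages the two inputs, and the composite keeps the first image everywhere except where the mask marks the detected object.

```python
import numpy as np

def blend(img1, img2):
    # Row 1: 50/50 blend of two uint8 images (widen to avoid overflow)
    return ((img1.astype(np.uint16) + img2.astype(np.uint16)) // 2).astype(np.uint8)

def composite(scene, scene_with_object, mask):
    # Row 3: keep the scene, copy only mask-marked pixels from the second image
    out = scene.copy()
    out[mask > 0] = scene_with_object[mask > 0]
    return out

scene = np.zeros((2, 2), dtype=np.uint8)
with_object = np.full((2, 2), 200, dtype=np.uint8)
mask = np.array([[255, 0], [0, 0]], dtype=np.uint8)

blended = blend(scene, with_object)         # every pixel averages to 100
comp = composite(scene, with_object, mask)  # only the masked pixel changes
```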

## Use Cases

- **Photography**: Remove unwanted elements from otherwise perfect shots
- **Surveillance**: Detect new objects in monitored areas
- **Research**: Analyze scene changes in controlled environments
- **Wildlife Monitoring**: Detect animals in natural settings
- **Retail/Inventory**: Track object placement and removal

## Installation & Setup

### **1. Clone the Repository**

```bash
git clone https://github.com/your-username/object-difference-highlighter.git
cd object-difference-highlighter
```

### **2. Install Dependencies**

Ensure you have Python installed, then run:

```bash
pip install -r requirements.txt
```

### **3. Run the Application**

```bash
python app.py
```

The application will launch in your default web browser via **Gradio**.

## Usage Guide

### **Step 1: Upload Images**

Upload two images:

1. **Image Without Object (Scene)**: The base scene without the object of interest
2. **Image With Object**: The same scene with the added object

### **Step 2: Configure Parameters**

- **Comparison Method**: Select the algorithm best suited for your images
  - SSIM works well for most cases
  - Background Subtraction is good for static cameras
  - Optical Flow works well for slight camera movements
- **Gaussian Blur**: Adjust to reduce noise (use higher values for noisier images)
- **Thresholding Technique**: When using SSIM, determines how differences are identified
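One practical detail when tuning the Gaussian Blur slider: OpenCV requires Gaussian kernel sizes to be odd and positive, so a preprocessing step typically rounds the slider value up to the nearest odd integer. The helper below is a hypothetical sketch of that step (the app's own `preprocess_image` is not shown in this diff):

```python
def make_odd(k):
    """Round a blur slider value up to the nearest valid (odd, >= 1) kernel size."""
    k = max(1, int(k))
    return k if k % 2 == 1 else k + 1

# Even slider values are bumped up to the next odd kernel size
sizes = [make_odd(k) for k in range(1, 6)]
```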
### **Step 3: Process & Review Results**

- Click **Process** to generate all six visualizations
- Compare the different views to understand how well the algorithm detected the new object
- Use the mask and highlighted differences to evaluate detection quality
- The composite image shows the final result, with only the detected object added to the scene

## Dependencies

- Python 3.7+
- OpenCV
- NumPy
- scikit-image
- Gradio
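For reference, a matching `requirements.txt` might look like the following (package names only; any version pins would be illustrative, not taken from this repository):

```text
opencv-python
numpy
scikit-image
gradio
```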
app.py
CHANGED
@@ -15,8 +15,33 @@ def background_subtraction(image1, image2):
     fgmask1 = subtractor.apply(image1)
     fgmask2 = subtractor.apply(image2)
     diff = cv2.absdiff(fgmask1, fgmask2)
+
+    # Create a binary mask
+    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
+
+    # Create highlighted differences
+    highlighted = cv2.bitwise_and(image2, image2, mask=mask)
+
+    # Create raw difference overlay
+    diff_colored = cv2.cvtColor(diff, cv2.COLOR_GRAY2BGR)
+    diff_colored[:, :, 0] = 0  # Remove blue
+    diff_colored[:, :, 1] = 0  # Remove green
+    overlay = cv2.addWeighted(image1, 0.6, diff_colored, 0.6, 0)
+
+    # Create a blended image
+    blended = cv2.addWeighted(image1, 0.5, image2, 0.5, 0)
+
+    # Create a composite using the mask
+    composite = image1.copy()
+    composite[mask > 0] = image2[mask > 0]
+
+    # Create final difference overlay
+    composite_diff = cv2.absdiff(image1, composite)
+    composite_diff[:, :, 0] = 0  # Remove blue
+    composite_diff[:, :, 1] = 0  # Remove green
+    final_overlay = cv2.addWeighted(image1, 0.6, composite_diff, 0.6, 0)
+
+    return blended, overlay, highlighted, mask, composite, final_overlay
 
 def optical_flow(image1, image2):
     gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
@@ -27,28 +52,39 @@ def optical_flow(image1, image2):
     hsv[..., 1] = 255
     hsv[..., 0] = ang * 180 / np.pi / 2
     hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
+
+    # Create mask from magnitude
+    mask = cv2.threshold(hsv[..., 2], 30, 255, cv2.THRESH_BINARY)[1].astype(np.uint8)
+
+    # Create highlighted differences
+    highlighted = cv2.bitwise_and(image2, image2, mask=mask)
+
+    # Create raw difference overlay
+    diff_colored = cv2.absdiff(image1, image2)
+    diff_colored[:, :, 0] = 0  # Remove blue
+    diff_colored[:, :, 1] = 0  # Remove green
+    overlay = cv2.addWeighted(image1, 0.6, diff_colored, 0.6, 0)
+
+    # Create a blended image
+    blended = cv2.addWeighted(image1, 0.5, image2, 0.5, 0)
+
+    # Create a composite using the mask
+    composite = image1.copy()
+    composite[mask > 0] = image2[mask > 0]
+
+    # Create final difference overlay
+    composite_diff = cv2.absdiff(image1, composite)
+    composite_diff[:, :, 0] = 0  # Remove blue
+    composite_diff[:, :, 1] = 0  # Remove green
+    final_overlay = cv2.addWeighted(image1, 0.6, composite_diff, 0.6, 0)
+
+    return blended, overlay, highlighted, mask, composite, final_overlay
 
 def feature_matching(image1, image2):
+    # Use SSIM as a fallback for feature matching since the original
+    # implementation doesn't give us a good mask
+    return compare_ssim(image1, image2, 5, "Adaptive Threshold", 30)
 
+def compare_ssim(image1, image2, blur_value, technique, threshold_value):
     gray1 = preprocess_image(image1, blur_value)
     gray2 = preprocess_image(image2, blur_value)
     score, diff = ssim(gray1, gray2, full=True)
@@ -63,20 +99,45 @@ def compare_images(image1, image2, blur_value, technique, threshold_value, method):
 
     contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
     filtered_contours = [cnt for cnt in contours if cv2.contourArea(cnt) > 500]
+    mask = np.zeros_like(gray1, dtype=np.uint8)
+    cv2.drawContours(mask, filtered_contours, -1, 255, thickness=cv2.FILLED)
+
+    # Create highlighted differences
+    highlighted = cv2.bitwise_and(image2, image2, mask=mask)
 
+    # Create raw difference overlay
     diff_colored = cv2.absdiff(image1, image2)
     diff_colored[:, :, 0] = 0  # Remove blue
     diff_colored[:, :, 1] = 0  # Remove green
+    overlay = cv2.addWeighted(image1, 0.6, diff_colored, 0.6, 0)
 
+    # Create a blended image
+    blended = cv2.addWeighted(image1, 0.5, image2, 0.5, 0)
 
+    # Create a composite using the mask
+    mask_3channel = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
+    masked_obj = cv2.bitwise_and(image2, mask_3channel)
+    masked_bg = cv2.bitwise_and(image1, cv2.bitwise_not(mask_3channel))
+    composite = cv2.add(masked_bg, masked_obj)
 
+    # Create final difference overlay
+    composite_diff = cv2.absdiff(image1, composite)
+    composite_diff[:, :, 0] = 0  # Remove blue
+    composite_diff[:, :, 1] = 0  # Remove green
+    final_overlay = cv2.addWeighted(image1, 0.6, composite_diff, 0.6, 0)
+
+    return blended, overlay, highlighted, mask, composite, final_overlay
+
+def compare_images(image1, image2, blur_value, technique, threshold_value, method):
+    if method == "Background Subtraction":
+        return background_subtraction(image1, image2)
+    elif method == "Optical Flow":
+        return optical_flow(image1, image2)
+    elif method == "Feature Matching":
+        return feature_matching(image1, image2)
+    else:  # SSIM
+        return compare_ssim(image1, image2, blur_value, technique, threshold_value)
 
 def update_threshold_visibility(technique):
     return gr.update(visible=(technique == "Simple Binary"))
@@ -85,7 +146,7 @@ with gr.Blocks() as demo:
     gr.Markdown("# Object Difference Highlighter\nUpload two images: one without an object and one with an object. The app will highlight only the newly added object and show the real differences in magenta overlayed on the original image.")
 
     with gr.Row():
+        img1 = gr.Image(type="numpy", label="Image Without Object (Scene)")
         img2 = gr.Image(type="numpy", label="Image With Object")
 
     blur_slider = gr.Slider(minimum=1, maximum=15, step=1, value=5, label="Gaussian Blur")
@@ -95,15 +156,22 @@ with gr.Blocks() as demo:
 
     technique_dropdown.change(update_threshold_visibility, inputs=[technique_dropdown], outputs=[threshold_slider])
 
+    # Row 1 - Blend and Raw Difference
     with gr.Row():
+        output1 = gr.Image(type="numpy", label="Blended Image")
         output2 = gr.Image(type="numpy", label="Raw Difference Overlay (Magenta)")
 
+    # Row 2 - Algorithmic Differences and Mask
+    with gr.Row():
+        output3 = gr.Image(type="numpy", label="Highlighted Differences")
+        output4 = gr.Image(type="numpy", label="Black & White Mask")
+
+    # Row 3 - Composite and Final Difference
     with gr.Row():
+        output5 = gr.Image(type="numpy", label="Composite (Scene + Masked Object)")
+        output6 = gr.Image(type="numpy", label="Final Difference Overlay (Magenta)")
 
     btn = gr.Button("Process")
+    btn.click(compare_images, inputs=[img1, img2, blur_slider, technique_dropdown, threshold_slider, method_dropdown], outputs=[output1, output2, output3, output4, output5, output6])
 
 demo.launch()