import gradio as gr
from ultralytics import YOLO
import torch
import cv2
import numpy as np
from PIL import Image
import pandas as pd
import os
import uuid
from datetime import datetime
import folium
import h3

# Load YOLO model. Note: the stock COCO-trained yolov8n.pt has no dedicated
# "tree" class; swap in custom-trained weights for real tree detection.
model = YOLO("yolov8n.pt")

# Try loading MiDaS depth model (small variant, CPU only)
try:
    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small", trust_repo=True)
    midas.to("cpu").eval()
    midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms", trust_repo=True).small_transform
    use_depth = True
except Exception as e:
    print(f"Depth model load failed: {e}")
    use_depth = False

# CSV file setup
csv_file = "tree_measurements.csv"
if not os.path.exists(csv_file):
    pd.DataFrame(columns=["Timestamp", "Latitude", "Longitude", "H3_Index",
                          "Height", "Species", "Image_File"]).to_csv(csv_file, index=False)

# Dummy classifier
def classify_tree_species(image):
    # Placeholder - returns a fixed label
    return "Generic Tree"

# Process function
def analyze_tree(image, latitude, longitude):
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    image_np = np.array(image.convert("RGB"))  # drop any alpha channel
    img_cv = cv2.cvtColor(image_np, cv2.COLOR_RGB2BGR)

    results = model(img_cv)
    detections = results[0].boxes.data.cpu().numpy()
    if len(detections) == 0:
        return "No tree detected", image, "N/A", generate_map()

    # Use the first detection only
    x1, y1, x2, y2, conf, cls = detections[0]
    crop = img_cv[int(y1):int(y2), int(x1):int(x2)]
    crop_rgb = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)
    tree_crop = Image.fromarray(crop_rgb)

    # Depth estimation: MiDaS small_transform expects an RGB numpy array and
    # already returns a batched tensor. The min-max depth range is a relative,
    # unitless proxy for height, not a metric measurement.
    if use_depth:
        try:
            input_batch = midas_transforms(crop_rgb)
            with torch.no_grad():
                depth = midas(input_batch).squeeze().cpu().numpy()
            approx_height = round(float(np.max(depth) - np.min(depth)), 2)
        except Exception:
            approx_height = "Unavailable"
    else:
        approx_height = "Unavailable"

    # Geolocation + H3 index (h3-py < 4.x API; on 4.x use h3.latlng_to_cell)
    h3_index = h3.geo_to_h3(float(latitude), float(longitude), 9)

    # Species
    species = classify_tree_species(tree_crop)

    # Save cropped image
    image_id = f"tree_{uuid.uuid4().hex[:8]}.png"
    tree_crop.save(image_id)

    # Append to CSV
    pd.DataFrame([{
        "Timestamp": timestamp,
        "Latitude": latitude,
        "Longitude": longitude,
        "H3_Index": h3_index,
        "Height": approx_height,
        "Species": species,
        "Image_File": image_id
    }]).to_csv(csv_file, mode="a", header=False, index=False)

    return f"Height: {approx_height} units\nSpecies: {species}", tree_crop, species, generate_map()

# Render map
def generate_map():
    if not os.path.exists(csv_file):
        return "No map yet."
    df = pd.read_csv(csv_file)
    if df.empty:
        return "No map data."
    lat, lon = df.iloc[-1][["Latitude", "Longitude"]]
    fmap = folium.Map(location=[lat, lon], zoom_start=14)
    for _, row in df.iterrows():
        folium.Marker(
            location=[row["Latitude"], row["Longitude"]],
            popup=f"{row['Species']} ({row['Height']} units)"
        ).add_to(fmap)
    fmap.save("map.html")
    with open("map.html", "r", encoding="utf-8") as f:
        return f.read()

# Gradio UI
with gr.Blocks() as demo:
    gr.Markdown("## 🌳 Tree Height & Species Estimator with Map & Logger")
    with gr.Row():
        image_input = gr.Image(type="pil", label="📸 Tree Image")
        lat_input = gr.Textbox(label="🌍 Latitude", placeholder="e.g., 12.9716")
        lon_input = gr.Textbox(label="🌍 Longitude", placeholder="e.g., 77.5946")
    btn = gr.Button("Analyze Tree")
    output_text = gr.Textbox(label="📏 Results")
    output_crop = gr.Image(label="🌲 Detected Tree")
    output_species = gr.Textbox(label="🌳 Species")
    map_html = gr.HTML(label="🗺️ Tree Map")
    btn.click(
        analyze_tree,
        inputs=[image_input, lat_input, lon_input],
        outputs=[output_text, output_crop, output_species, map_html],
    )

demo.launch()