import torch
import gradio as gr
from PIL import Image
import numpy as np

# Load the YOLOv5 model (swap in your own trained weights if needed)
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s.pt')

# Example function to calculate materials based on detected areas
# (the proportionality factors are simplified placeholders; adjust them
# against real-world data for your project)
def calculate_materials(detected_objects, image_width, image_height):
    materials = {
        "cement": 0,
        "bricks": 0,
        "steel": 0
    }

    for obj in detected_objects:
        # Bounding box coordinates, normalized to [0, 1]
        x1, y1, x2, y2 = obj['bbox']
        # Scale to real-world units (image_width/image_height are the
        # blueprint's real-world dimensions, e.g. in cm)
        width = (x2 - x1) * image_width
        height = (y2 - y1) * image_height

        # Simplified area calculation (length × width)
        area = width * height
        print(f"Detected {obj['name']} with area {area} cm²")  # Debugging output

        if obj['name'] == 'wall':
            materials['cement'] += area * 0.1   # Cement estimation (in kg)
            materials['bricks'] += area * 10    # Bricks estimation
            materials['steel'] += area * 0.05   # Steel estimation
        elif obj['name'] == 'foundation':
            materials['cement'] += area * 0.2   # More cement for foundations
            materials['bricks'] += area * 15    # More bricks for foundations
            materials['steel'] += area * 0.1    # More steel for foundations

    return materials

# Define the function for image inference
def predict_image(image):
    results = model(image)  # Run inference on the input image
    # Detections as a pandas DataFrame with *normalized* xmin/ymin/xmax/ymax
    # columns plus the class name (xyxyn, so the scaling above is correct)
    df = results.pandas().xyxyn[0]
    detected_objects = [
        {'name': row['name'], 'bbox': (row['xmin'], row['ymin'], row['xmax'], row['ymax'])}
        for _, row in df.iterrows()
    ]
    # Calculate real-world material estimates. NOTE: the original snippet is
    # truncated at this point; the blueprint dimensions below are assumed
    # placeholders (the drawing's real-world size in cm). Replace them with
    # values taken from your actual blueprints.
    blueprint_width_cm = 1000
    blueprint_height_cm = 800
    return calculate_materials(detected_objects, blueprint_width_cm, blueprint_height_cm)
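
# The snippet imports gradio but cuts off before any UI is defined. Below is a
# minimal wiring sketch (an assumption, not part of the original code): expose
# predict_image through a Gradio Interface that takes an uploaded blueprint
# image and returns the material estimates as JSON.
interface = gr.Interface(
    fn=predict_image,
    inputs=gr.Image(type="pil"),                   # Hand the upload to YOLOv5 as a PIL image
    outputs=gr.JSON(label="Estimated materials"),  # predict_image returns a dict
    title="Blueprint Material Estimator",
)

if __name__ == "__main__":
    interface.launch()  # Serves the app locally; launch(share=True) gives a public link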