Tastycoole committed
Commit b7b2f27 · 1 Parent(s): c3ed611

test base api
.gitignore ADDED
@@ -0,0 +1,17 @@
+ .ipynb_checkpoints/sandbox-checkpoint.ipynb
+
+ auto_evals/
+ venv/
+ __pycache__/
+ .env
+ .ipynb_checkpoints
+ *ipynb
+ .vscode/
+
+ eval-queue/
+ eval-results/
+ eval-queue-bk/
+ eval-results-bk/
+ logs/
+
+ emissions.csv
Dockerfile ADDED
@@ -0,0 +1,16 @@
+ # Read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
+ # you will also find guides on how best to write your Dockerfile
+
+ FROM python:3.9
+
+ RUN useradd -m -u 1000 user
+ USER user
+ ENV PATH="/home/user/.local/bin:$PATH"
+
+ WORKDIR /app
+
+ COPY --chown=user ./requirements.txt requirements.txt
+ RUN pip install --no-cache-dir --upgrade -r requirements.txt
+
+ COPY --chown=user . /app
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,13 +1,71 @@
  ---
- title: Frugal Ai Challenge
- emoji: 🌍
- colorFrom: purple
- colorTo: gray
- sdk: streamlit
- sdk_version: 1.41.1
- app_file: app.py
+ title: Submission Template
+ emoji: 🔥
+ colorFrom: yellow
+ colorTo: green
+ sdk: docker
  pinned: false
- short_description: test API
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+ # Random Baseline Model for Climate Disinformation Classification
+
+ ## Model Description
+
+ This is a random baseline model for the Frugal AI Challenge 2024, specifically for the text classification task of identifying climate disinformation. The model serves as a performance floor, randomly assigning labels to text inputs without any learning.
+
+ ### Intended Use
+
+ - **Primary intended uses**: Baseline comparison for climate disinformation classification models
+ - **Primary intended users**: Researchers and developers participating in the Frugal AI Challenge
+ - **Out-of-scope use cases**: Not intended for production use or real-world classification tasks
+
+ ## Training Data
+
+ The model uses the QuotaClimat/frugalaichallenge-text-train dataset:
+ - Size: ~6000 examples
+ - Split: 80% train, 20% test
+ - 8 categories of climate disinformation claims
+
+ ### Labels
+ 0. No relevant claim detected
+ 1. Global warming is not happening
+ 2. Not caused by humans
+ 3. Not bad or beneficial
+ 4. Solutions harmful/unnecessary
+ 5. Science is unreliable
+ 6. Proponents are biased
+ 7. Fossil fuels are needed
+
+ ## Performance
+
+ ### Metrics
+ - **Accuracy**: ~12.5% (random chance with 8 classes)
+ - **Environmental Impact**:
+   - Emissions tracked in gCO2eq
+   - Energy consumption tracked in Wh
+
+ ### Model Architecture
+ The model implements a random choice between the 8 possible labels, serving as the simplest possible baseline.
+
+ ## Environmental Impact
+
+ Environmental impact is tracked using CodeCarbon, measuring:
+ - Carbon emissions during inference
+ - Energy consumption during inference
+
+ This tracking helps establish a baseline for the environmental impact of model deployment and inference.
+
+ ## Limitations
+ - Makes completely random predictions
+ - No learning or pattern recognition
+ - No consideration of input text
+ - Serves only as a baseline reference
+ - Not suitable for any real-world applications
+
+ ## Ethical Considerations
+
+ - Dataset contains sensitive topics related to climate disinformation
+ - Model makes random predictions and should not be used for actual classification
+ - Environmental impact is tracked to promote awareness of AI's carbon footprint
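The ~12.5% accuracy quoted in the Metrics section is just 1/8: random predictions are independent of the true labels, so the match probability is 1/8 whatever the label distribution. A quick simulation (illustrative only, not part of the committed code) bears this out:

```python
# Uniform random predictions over 8 classes against random labels.
import random

n = 100_000
hits = sum(random.randrange(8) == random.randrange(8) for _ in range(n))
print(hits / n)  # ~0.125
```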
app.py ADDED
@@ -0,0 +1,27 @@
+ from fastapi import FastAPI
+ from dotenv import load_dotenv
+ from tasks import text, image, audio
+
+ # Load environment variables
+ load_dotenv()
+
+ app = FastAPI(
+     title="Frugal AI Challenge API",
+     description="API for the Frugal AI Challenge evaluation endpoints"
+ )
+
+ # Include all routers
+ app.include_router(text.router)
+ app.include_router(image.router)
+ app.include_router(audio.router)
+
+ @app.get("/")
+ async def root():
+     return {
+         "message": "Welcome to the Frugal AI Challenge API",
+         "endpoints": {
+             "text": "/text - Text classification task",
+             "image": "/image - Image classification task (coming soon)",
+             "audio": "/audio - Audio classification task (coming soon)"
+         }
+     }
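For a quick smoke test of the app above, something like the following should work once the container is up. This is a sketch, not part of the commit; the localhost URL and port 7860 are assumptions taken from the Dockerfile's CMD:

```python
# Hypothetical local smoke test: list the routes exposed by app.py.
import requests

resp = requests.get("http://localhost:7860/")
resp.raise_for_status()
print(resp.json()["endpoints"])  # {'text': '/text - Text classification task', ...}
```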
requirements.txt ADDED
@@ -0,0 +1,10 @@
+ fastapi>=0.68.0
+ uvicorn>=0.15.0
+ codecarbon>=2.3.1
+ datasets>=2.14.0
+ scikit-learn>=1.0.2
+ pydantic>=1.10.0
+ python-dotenv>=1.0.0
+ gradio>=4.0.0
+ requests>=2.31.0
+ librosa==0.10.2.post1
tasks/__init__.py ADDED
File without changes
tasks/audio.py ADDED
@@ -0,0 +1,88 @@
+ from fastapi import APIRouter
+ from datetime import datetime
+ from datasets import load_dataset
+ from sklearn.metrics import accuracy_score
+ import random
+ import os
+
+ from .utils.evaluation import AudioEvaluationRequest
+ from .utils.emissions import tracker, clean_emissions_data, get_space_info
+
+ from dotenv import load_dotenv
+ load_dotenv()
+
+ router = APIRouter()
+
+ DESCRIPTION = "Random Baseline"
+ ROUTE = "/audio"
+
+
+ @router.post(ROUTE, tags=["Audio Task"],
+              description=DESCRIPTION)
+ async def evaluate_audio(request: AudioEvaluationRequest):
+     """
+     Evaluate audio classification for rainforest sound detection.
+
+     Current Model: Random Baseline
+     - Makes random predictions from the label space (0-1)
+     - Used as a baseline for comparison
+     """
+     # Get space info
+     username, space_url = get_space_info()
+
+     # Define the label mapping
+     LABEL_MAPPING = {
+         "chainsaw": 0,
+         "environment": 1
+     }
+     # Load and prepare the dataset
+     # Because the dataset is gated, we need the HF_TOKEN environment variable to authenticate
+     dataset = load_dataset(request.dataset_name, token=os.getenv("HF_TOKEN"))
+
+     # Split dataset
+     train_test = dataset["train"].train_test_split(test_size=request.test_size, seed=request.test_seed)
+     test_dataset = train_test["test"]
+
+     # Start tracking emissions
+     tracker.start()
+     tracker.start_task("inference")
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE CODE HERE
+     # Update the code below to replace the random baseline with your model inference, inside the tracked section where energy consumption and emissions are measured.
+     #--------------------------------------------------------------------------------------------
+
+     # Make random predictions (placeholder for actual model inference)
+     true_labels = test_dataset["label"]
+     predictions = [random.randint(0, 1) for _ in range(len(true_labels))]
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE STOPS HERE
+     #--------------------------------------------------------------------------------------------
+
+     # Stop tracking emissions
+     emissions_data = tracker.stop_task()
+
+     # Calculate accuracy
+     accuracy = accuracy_score(true_labels, predictions)
+
+     # Prepare results dictionary
+     results = {
+         "username": username,
+         "space_url": space_url,
+         "submission_timestamp": datetime.now().isoformat(),
+         "model_description": DESCRIPTION,
+         "accuracy": float(accuracy),
+         "energy_consumed_wh": emissions_data.energy_consumed * 1000,
+         "emissions_gco2eq": emissions_data.emissions * 1000,
+         "emissions_data": clean_emissions_data(emissions_data),
+         "api_route": ROUTE,
+         "dataset_config": {
+             "dataset_name": request.dataset_name,
+             "test_size": request.test_size,
+             "test_seed": request.test_seed
+         }
+     }
+
+     return results
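As one example of what could replace the random predictions inside the tracked block, here is a minimal sketch using scikit-learn's DummyClassifier (already covered by the scikit-learn requirement). It reuses `train_test` and `test_dataset` from the route above; the zero-valued placeholder features are an assumption for illustration, since DummyClassifier ignores its inputs:

```python
# Sketch only: a majority-class baseline in place of random.randint.
from sklearn.dummy import DummyClassifier

train_dataset = train_test["train"]
clf = DummyClassifier(strategy="most_frequent")
# DummyClassifier ignores features, so constant placeholders suffice.
clf.fit([[0]] * len(train_dataset["label"]), train_dataset["label"])
predictions = clf.predict([[0]] * len(test_dataset["label"])).tolist()
```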
tasks/image.py ADDED
@@ -0,0 +1,172 @@
+ from fastapi import APIRouter
+ from datetime import datetime
+ from datasets import load_dataset
+ import numpy as np
+ from sklearn.metrics import accuracy_score
+ import random
+ import os
+
+ from .utils.evaluation import ImageEvaluationRequest
+ from .utils.emissions import tracker, clean_emissions_data, get_space_info
+
+ from dotenv import load_dotenv
+ load_dotenv()
+
+ router = APIRouter()
+
+ DESCRIPTION = "Random Baseline"
+ ROUTE = "/image"
+
+ def parse_boxes(annotation_string):
+     """Parse multiple boxes from a single annotation string.
+     Each box has 5 values: class_id, x_center, y_center, width, height"""
+     values = [float(x) for x in annotation_string.strip().split()]
+     boxes = []
+     # Each box has 5 values
+     for i in range(0, len(values), 5):
+         if i + 5 <= len(values):
+             # Skip class_id (first value) and take the next 4 values
+             box = values[i+1:i+5]
+             boxes.append(box)
+     return boxes
+
+ def compute_iou(box1, box2):
+     """Compute Intersection over Union (IoU) between two YOLO format boxes."""
+     # Convert YOLO format (x_center, y_center, width, height) to corners
+     def yolo_to_corners(box):
+         x_center, y_center, width, height = box
+         x1 = x_center - width/2
+         y1 = y_center - height/2
+         x2 = x_center + width/2
+         y2 = y_center + height/2
+         return np.array([x1, y1, x2, y2])
+
+     box1_corners = yolo_to_corners(box1)
+     box2_corners = yolo_to_corners(box2)
+
+     # Calculate intersection
+     x1 = max(box1_corners[0], box2_corners[0])
+     y1 = max(box1_corners[1], box2_corners[1])
+     x2 = min(box1_corners[2], box2_corners[2])
+     y2 = min(box1_corners[3], box2_corners[3])
+
+     intersection = max(0, x2 - x1) * max(0, y2 - y1)
+
+     # Calculate union
+     box1_area = (box1_corners[2] - box1_corners[0]) * (box1_corners[3] - box1_corners[1])
+     box2_area = (box2_corners[2] - box2_corners[0]) * (box2_corners[3] - box2_corners[1])
+     union = box1_area + box2_area - intersection
+
+     return intersection / (union + 1e-6)
+
+ def compute_max_iou(true_boxes, pred_box):
+     """Compute maximum IoU between a predicted box and all true boxes"""
+     max_iou = 0
+     for true_box in true_boxes:
+         iou = compute_iou(true_box, pred_box)
+         max_iou = max(max_iou, iou)
+     return max_iou
+
+ @router.post(ROUTE, tags=["Image Task"],
+              description=DESCRIPTION)
+ async def evaluate_image(request: ImageEvaluationRequest):
+     """
+     Evaluate image classification and object detection for forest fire smoke.
+
+     Current Model: Random Baseline
+     - Makes random predictions for both classification and bounding boxes
+     - Used as a baseline for comparison
+
+     Metrics:
+     - Classification accuracy: Whether an image contains smoke or not
+     - Object Detection accuracy: IoU (Intersection over Union) for smoke bounding boxes
+     """
+     # Get space info
+     username, space_url = get_space_info()
+
+     # Load and prepare the dataset
+     dataset = load_dataset(request.dataset_name, token=os.getenv("HF_TOKEN"))
+
+     # Split dataset
+     train_test = dataset["train"].train_test_split(test_size=request.test_size, seed=request.test_seed)
+     test_dataset = train_test["test"]
+
+     # Start tracking emissions
+     tracker.start()
+     tracker.start_task("inference")
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE CODE HERE
+     # Update the code below to replace the random baseline with your model inference
+     #--------------------------------------------------------------------------------------------
+
+     predictions = []
+     true_labels = []
+     pred_boxes = []
+     true_boxes_list = []  # List of lists, each inner list contains boxes for one image
+
+     for example in test_dataset:
+         # Parse true annotation (YOLO format: class_id x_center y_center width height)
+         annotation = example.get("annotations", "").strip()
+         has_smoke = len(annotation) > 0
+         true_labels.append(int(has_smoke))
+
+         # Make a random classification prediction
+         pred_has_smoke = random.random() > 0.5
+         predictions.append(int(pred_has_smoke))
+
+         # If there's a true box, parse it and make a random box prediction
+         if has_smoke:
+             # Parse all true boxes from the annotation
+             image_true_boxes = parse_boxes(annotation)
+             true_boxes_list.append(image_true_boxes)
+
+             # For the baseline, make one random box prediction per image
+             # In a real model, you might want to predict multiple boxes
+             random_box = [
+                 random.random(),        # x_center
+                 random.random(),        # y_center
+                 random.random() * 0.5,  # width (max 0.5)
+                 random.random() * 0.5   # height (max 0.5)
+             ]
+             pred_boxes.append(random_box)
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE STOPS HERE
+     #--------------------------------------------------------------------------------------------
+
+     # Stop tracking emissions
+     emissions_data = tracker.stop_task()
+
+     # Calculate classification accuracy
+     classification_accuracy = accuracy_score(true_labels, predictions)
+
+     # Calculate mean IoU for object detection (only for images with smoke)
+     # For each image, we compute the max IoU between the predicted box and all true boxes
+     ious = []
+     for true_boxes, pred_box in zip(true_boxes_list, pred_boxes):
+         max_iou = compute_max_iou(true_boxes, pred_box)
+         ious.append(max_iou)
+
+     mean_iou = float(np.mean(ious)) if ious else 0.0
+
+     # Prepare results dictionary
+     results = {
+         "username": username,
+         "space_url": space_url,
+         "submission_timestamp": datetime.now().isoformat(),
+         "model_description": DESCRIPTION,
+         "classification_accuracy": float(classification_accuracy),
+         "mean_iou": mean_iou,
+         "energy_consumed_wh": emissions_data.energy_consumed * 1000,
+         "emissions_gco2eq": emissions_data.emissions * 1000,
+         "emissions_data": clean_emissions_data(emissions_data),
+         "api_route": ROUTE,
+         "dataset_config": {
+             "dataset_name": request.dataset_name,
+             "test_size": request.test_size,
+             "test_seed": request.test_seed
+         }
+     }
+
+     return results
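The `compute_iou` helper can be sanity-checked by hand. A small worked example (boxes invented for illustration; run from the repo root so the `tasks` package imports):

```python
# Two overlapping YOLO-format boxes (x_center, y_center, width, height).
from tasks.image import compute_iou

box_a = [0.5, 0.5, 0.4, 0.4]  # corners (0.3, 0.3) .. (0.7, 0.7)
box_b = [0.6, 0.6, 0.4, 0.4]  # corners (0.4, 0.4) .. (0.8, 0.8)
# intersection = 0.3 * 0.3 = 0.09
# union = 0.16 + 0.16 - 0.09 = 0.23
# IoU = 0.09 / 0.23 ~ 0.391
print(compute_iou(box_a, box_b))  # ~0.391
```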
tasks/text.py ADDED
@@ -0,0 +1,92 @@
+ from fastapi import APIRouter
+ from datetime import datetime
+ from datasets import load_dataset
+ from sklearn.metrics import accuracy_score
+ import random
+
+ from .utils.evaluation import TextEvaluationRequest
+ from .utils.emissions import tracker, clean_emissions_data, get_space_info
+
+ router = APIRouter()
+
+ DESCRIPTION = "Random Baseline"
+ ROUTE = "/text"
+
+ @router.post(ROUTE, tags=["Text Task"],
+              description=DESCRIPTION)
+ async def evaluate_text(request: TextEvaluationRequest):
+     """
+     Evaluate text classification for climate disinformation detection.
+
+     Current Model: Random Baseline
+     - Makes random predictions from the label space (0-7)
+     - Used as a baseline for comparison
+     """
+     # Get space info
+     username, space_url = get_space_info()
+
+     # Define the label mapping
+     LABEL_MAPPING = {
+         "0_not_relevant": 0,
+         "1_not_happening": 1,
+         "2_not_human": 2,
+         "3_not_bad": 3,
+         "4_solutions_harmful_unnecessary": 4,
+         "5_science_unreliable": 5,
+         "6_proponents_biased": 6,
+         "7_fossil_fuels_needed": 7
+     }
+
+     # Load and prepare the dataset
+     dataset = load_dataset(request.dataset_name)
+
+     # Convert string labels to integers
+     dataset = dataset.map(lambda x: {"label": LABEL_MAPPING[x["label"]]})
+
+     # Split dataset
+     train_test = dataset["train"].train_test_split(test_size=request.test_size, seed=request.test_seed)
+     test_dataset = train_test["test"]
+
+     # Start tracking emissions
+     tracker.start()
+     tracker.start_task("inference")
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE CODE HERE
+     # Update the code below to replace the random baseline with your model inference, inside the tracked section where energy consumption and emissions are measured.
+     #--------------------------------------------------------------------------------------------
+
+     # Make random predictions (placeholder for actual model inference)
+     true_labels = test_dataset["label"]
+     predictions = [random.randint(0, 7) for _ in range(len(true_labels))]
+
+     #--------------------------------------------------------------------------------------------
+     # YOUR MODEL INFERENCE STOPS HERE
+     #--------------------------------------------------------------------------------------------
+
+     # Stop tracking emissions
+     emissions_data = tracker.stop_task()
+
+     # Calculate accuracy
+     accuracy = accuracy_score(true_labels, predictions)
+
+     # Prepare results dictionary
+     results = {
+         "username": username,
+         "space_url": space_url,
+         "submission_timestamp": datetime.now().isoformat(),
+         "model_description": DESCRIPTION,
+         "accuracy": float(accuracy),
+         "energy_consumed_wh": emissions_data.energy_consumed * 1000,
+         "emissions_gco2eq": emissions_data.emissions * 1000,
+         "emissions_data": clean_emissions_data(emissions_data),
+         "api_route": ROUTE,
+         "dataset_config": {
+             "dataset_name": request.dataset_name,
+             "test_size": request.test_size,
+             "test_seed": request.test_seed
+         }
+     }
+
+     return results
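For the text task, a natural first replacement for the random baseline inside the tracked block is a TF-IDF + logistic regression classifier, sketched below. It reuses `train_test` and `test_dataset` from above and assumes the dataset's text column is named `quote`; verify that column name against the actual dataset before using it:

```python
# Sketch only: train on the train split, predict on the test split.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_dataset = train_test["train"]
vectorizer = TfidfVectorizer(max_features=5000)
X_train = vectorizer.fit_transform(train_dataset["quote"])  # "quote" is an assumption
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train_dataset["label"])
predictions = clf.predict(vectorizer.transform(test_dataset["quote"])).tolist()
```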
tasks/utils/__init__.py ADDED
File without changes
tasks/utils/emissions.py ADDED
@@ -0,0 +1,28 @@
+ from codecarbon import EmissionsTracker
+ import os
+
+ # Initialize tracker
+ tracker = EmissionsTracker(allow_multiple_runs=True)
+
+ class EmissionsData:
+     def __init__(self, energy_consumed: float, emissions: float):
+         self.energy_consumed = energy_consumed
+         self.emissions = emissions
+
+ def clean_emissions_data(emissions_data):
+     """Remove unwanted fields from emissions data"""
+     data_dict = emissions_data.__dict__
+     fields_to_remove = ['timestamp', 'project_name', 'experiment_id', 'latitude', 'longitude']
+     return {k: v for k, v in data_dict.items() if k not in fields_to_remove}
+
+ def get_space_info():
+     """Get the space username and URL from environment variables"""
+     space_name = os.getenv("SPACE_ID", "")
+     if space_name:
+         try:
+             username = space_name.split("/")[0]
+             space_url = f"https://huggingface.co/spaces/{space_name}"
+             return username, space_url
+         except Exception as e:
+             print(f"Error getting space info: {e}")
+     return "local-user", "local-development"
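The shared tracker follows CodeCarbon's task API: start a named task, run the work, and read energy (kWh) and emissions (kg CO2eq) off the returned object, which is why the routes multiply by 1000 to report Wh and gCO2eq. A minimal standalone sketch (run from the repo root; the busy-loop is a stand-in, not real inference):

```python
# Sketch of the intended usage pattern for the shared tracker.
from tasks.utils.emissions import tracker, clean_emissions_data

tracker.start()
tracker.start_task("inference")
_ = sum(i * i for i in range(1_000_000))  # stand-in for model inference
data = tracker.stop_task()

print(data.energy_consumed * 1000, "Wh")  # CodeCarbon reports kWh
print(data.emissions * 1000, "gCO2eq")    # CodeCarbon reports kg CO2eq
print(clean_emissions_data(data))
```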
tasks/utils/evaluation.py ADDED
@@ -0,0 +1,18 @@
+ from typing import Optional
+ from pydantic import BaseModel, Field
+
+ class BaseEvaluationRequest(BaseModel):
+     test_size: float = Field(0.2, ge=0.0, le=1.0, description="Size of the test split (between 0 and 1)")
+     test_seed: int = Field(42, ge=0, description="Random seed for reproducibility")
+
+ class TextEvaluationRequest(BaseEvaluationRequest):
+     dataset_name: str = Field("QuotaClimat/frugalaichallenge-text-train",
+                               description="The name of the dataset on HuggingFace Hub")
+
+ class ImageEvaluationRequest(BaseEvaluationRequest):
+     dataset_name: str = Field("pyronear/pyro-sdis",
+                               description="The name of the dataset on HuggingFace Hub")
+
+ class AudioEvaluationRequest(BaseEvaluationRequest):
+     dataset_name: str = Field("rfcx/frugalai",
+                               description="The name of the dataset on HuggingFace Hub")
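Since every field in these models has a default, a valid request body can be built straight from the model. The sketch below posts a default payload to a locally running Space; the localhost URL is an assumption, and `.dict()` is the pydantic v1 serializer matching the `pydantic>=1.10.0` pin:

```python
# Sketch only: build a default /text request and submit it.
import requests
from tasks.utils.evaluation import TextEvaluationRequest

payload = TextEvaluationRequest()
# {'test_size': 0.2, 'test_seed': 42, 'dataset_name': 'QuotaClimat/frugalaichallenge-text-train'}
print(payload.dict())

resp = requests.post("http://localhost:7860/text", json=payload.dict())
print(resp.json()["accuracy"])
```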