Update app.py
app.py (CHANGED)
Before (context lines, removed lines marked `-`):

```diff
@@ -19,7 +19,7 @@ TELEGRAM_CHAT_ID = "-1002497861230"
 
 client = genai.Client(api_key=GOOGLE_API_KEY)
 
-# Prompt de base
 BASE_PROMPT = r"""
 # 🔍 GÉNÉRATEUR DE CORRECTION MATHÉMATIQUE (Version Directe)
 
```
```diff
@@ -27,268 +27,159 @@ BASE_PROMPT = r"""
 Vous êtes **Mariam-MATHEX-PRO**, un expert en mathématiques chargé de fournir des corrections. Votre objectif est d'être clair, précis et d'aller droit au but.
 
 ## 📊 FORMAT D'ENTRÉE ET SORTIE
-
-**ENTRÉE:** L'énoncé d'un exercice mathématique (niveau Terminale/Supérieur).
-
 **SORTIE:** UNIQUEMENT la correction de l'exercice **en français** avec rendu LaTeX.
 
 ## 🛠️ INSTRUCTIONS POUR LA CORRECTION
 
-
-
 
-
-
-    * Ne sautez pas d'étapes de calcul cruciales pour la compréhension. Écrivez les calculs intermédiaires importants.
 
-
-
-    * Une seule idée principale ou étape de calcul par segment de texte.
-
-4. **RÉSULTATS :**
-    * Indiquez clairement les résultats intermédiaires si pertinent, et énoncez distinctement le résultat final de chaque question ou sous-question.
 
 ## 🔧 RENDU MATHÉMATIQUE
-
-
-    * Utilisez le rendu LaTeX pour toutes les expressions mathématiques, équations et formules.
-    * Formatez correctement les calculs avec la syntaxe LaTeX appropriée.
 
 ## ✅ OBJECTIF PRINCIPAL
-Fournir une correction mathématique textuelle **en français** qui va
 """
 
-# Extension du prompt
 CODE_EXTENSION = r"""
-
 ## 🧮 EXIGENCES TECHNIQUES (MODE CALCULATRICE ACTIVÉ)
 
-6.
-
-    * Pour chaque figure ou graphique : générez-le avec Python, sauvegardez-le comme fichier image, puis affichez l'image.
-    * Intégrez le code Python dans la correction pour montrer la démarche de calcul.
-    * Utilisez des bibliothèques comme numpy, matplotlib, sympy selon les besoins.
 
-7.
-
-    * Créez des visualisations graphiques pour illustrer les concepts mathématiques.
 """
 
 class AgentSystem:
     def __init__(self):
         self.prompts_dir = Path("prompts")
         self.prompts = self.load_prompts()
-
     def load_prompts(self):
-        """Load all agent prompts from files"""
         prompts = {}
         try:
-            # Create prompts directory if it doesn't exist
             self.prompts_dir.mkdir(exist_ok=True)
-
-            # Define default prompts if files don't exist
-            default_prompts = {
-                "step1_initial_solution.md": """### Core Instructions ###
-
-* **Rigor is Paramount:** Your primary goal is to produce a complete and rigorously justified solution. Every step in your solution must be logically sound and clearly explained. A correct final answer derived from flawed or incomplete reasoning is considered a failure.
-* **Honesty About Completeness:** If you cannot find a complete solution, you must **not** guess or create a solution that appears correct but contains hidden flaws or justification gaps. Instead, you should present only significant partial results that you can rigorously prove. A partial result is considered significant if it represents a substantial advancement toward a full solution. Examples include:
-    * Proving a key lemma.
-    * Fully resolving one or more cases within a logically sound case-based proof.
-* **Use TeX for All Mathematics:** All mathematical variables, expressions, and relations must be enclosed in TeX delimiters (e.g., 'Let $n$ be an integer.').
-
-### Output Format ###
-
-Your response MUST be structured into the following sections, in this exact order.
-
-**1. Summary**
-
-Provide a concise overview of your findings. This section must contain two parts:
-
-* **a. Verdict:** State clearly whether you have found a complete solution or a partial solution.
-    * **For a complete solution:** State the final answer, e.g., "I have successfully solved the problem. The final answer is..."
-    * **For a partial solution:** State the main rigorous conclusion(s) you were able to prove, e.g., "I have not found a complete solution, but I have rigorously proven that..."
-* **b. Method Sketch:** Present a high-level, conceptual outline of your solution. This sketch should allow an expert to understand the logical flow of your argument without reading the full detail. It should include:
-    * A narrative of your overall strategy.
-    * The full and precise mathematical statements of any key lemmas or major intermediate results.
-    * If applicable, describe any key constructions or case splits that form the backbone of your argument.
-
-**2. Detailed Solution**
-
-Present the full, step-by-step mathematical proof. Each step must be logically justified and clearly explained. The level of detail should be sufficient for an expert to verify the correctness of your reasoning without needing to fill in any gaps. This section must contain ONLY the complete, rigorous proof, free of any internal commentary, alternative approaches, or failed attempts.
-
-### Self-Correction Instruction ###
-
-Before finalizing your output, carefully review your "Method Sketch" and "Detailed Solution" to ensure they are clean, rigorous, and strictly adhere to all instructions provided above. Verify that every statement contributes directly to the final, coherent mathematical argument.
 
 ### Problem ###
 [The mathematical problem will be inserted here]""",
 
-                "step2_self_improvement.md": """You are a world-class mathematician.
-
-
 
 ### Draft Solution ###
 [The initial solution attempt will be inserted here]
 
 ### Your Task ###
-Provide the improved
-
-                "step3_verification.md": """You are an expert mathematician and a meticulous grader for an International Mathematical Olympiad (IMO) level exam. Your primary task is to rigorously verify the provided mathematical solution. A solution is to be judged correct **only if every step is rigorously justified.** A solution that arrives at a correct final answer through flawed reasoning, educated guesses, or with gaps in its arguments must be flagged as incorrect or incomplete.
-
-### Instructions ###
-
-**1. Core Instructions**
-* Your sole task is to find and report all issues in the provided solution. You must act as a **verifier**, NOT a solver. **Do NOT attempt to correct the errors or fill the gaps you find.**
-* You must perform a **step-by-step** check of the entire solution. This analysis will be presented in a **Detailed Verification Log**, where you justify your assessment of each step: for correct steps, a brief justification suffices; for steps with errors or gaps, you must provide a detailed explanation.
-
-**2. How to Handle Issues in the Solution**
-When you identify an issue in a step, you MUST first classify it into one of the following two categories and then follow the specified procedure.
-
-* **a. Critical Error:**
-    This is any error that breaks the logical chain of the proof. This includes both **logical fallacies** (e.g., claiming that 'A>B, C>D' implies 'A-C>B-D') and **factual errors** (e.g., a calculation error like '2+3=6').
-    * **Procedure:**
-        * Explain the specific error and state that it **invalidates the current line of reasoning**.
-        * Do NOT check any further steps that rely on this error.
-        * You MUST, however, scan the rest of the solution to identify and verify any fully independent parts. For example, if a proof is split into multiple cases, an error in one case does not prevent you from checking the other cases.
-
-* **b. Justification Gap:**
-    This is for steps where the conclusion may be correct, but the provided argument is incomplete, hand-wavy, or lacks sufficient rigor.
-    * **Procedure:**
-        * Explain the gap in the justification.
-        * State that you will **assume the step's conclusion is true** for the sake of argument.
-        * Then, proceed to verify all subsequent steps to check if the remainder of the argument is sound.
-
-**3. Output Format**
-Your response MUST be structured into two main sections: a **Summary** followed by the **Detailed Verification Log**.
 
-
-
-* **Final Verdict:** A single, clear sentence declaring the overall validity of the solution. For example: "The solution is correct," "The solution contains a Critical Error and is therefore invalid," or "The solution's approach is viable but contains several Justification Gaps."
-* **List of Findings:** A bulleted list that summarizes **every** issue you discovered. For each finding, you must provide:
-    * **Location:** A direct quote of the key phrase or equation where the issue occurs.
-    * **Issue:** A brief description of the problem and its classification (**Critical Error** or **Justification Gap**).
 
-* **b. Detailed Verification Log**
-Following the summary, provide the full, step-by-step verification log as defined in the Core Instructions. When you refer to a specific part of the solution, **quote the relevant text** to make your reference clear before providing your detailed analysis of that part.
-
----
 ### Problem ###
 [The mathematical problem will be inserted here]
 
----
 ### Solution ###
 [The solution to be verified will be inserted here]
 
-
-
-
 
-                "step5_correction.md": """You are a brilliant mathematician attempting to solve a difficult problem.
-Your previous solution attempt has been reviewed by a verifier, who produced the report below.
-Your task is to write a new, corrected version of your solution that meticulously addresses all the issues raised in the report.
 
-### Verification Report
 [The full verification report will be inserted here]
 
-###
 [The previous solution attempt will be inserted here]
 
-###
-Provide a new
             }
-
-            # Create prompt files if they don't exist
             for filename, content in default_prompts.items():
                 prompt_file = self.prompts_dir / filename
                 if not prompt_file.exists():
                     prompt_file.write_text(content, encoding='utf-8')
-
                 prompts[filename.replace('.md', '')] = content
-
-            # Load existing prompts
            for prompt_file in self.prompts_dir.glob("*.md"):
                 prompts[prompt_file.stem] = prompt_file.read_text(encoding='utf-8')
-
         except Exception as e:
             print(f"Error loading prompts: {e}")
-            # Fallback to hardcoded prompts
-            prompts = {
-                "step1_initial_solution": default_prompts["step1_initial_solution.md"],
-                "step2_self_improvement": default_prompts["step2_self_improvement.md"],
-                "step3_verification": default_prompts["step3_verification.md"],
-                "step5_correction": default_prompts["step5_correction.md"]
-            }
-
         return prompts
-
     def extract_problem_text(self, img_str):
-        """Extract problem text from image using Gemini"""
         try:
             response = client.models.generate_content(
                 model="gemini-2.5-flash",
                 contents=[
                     {'inline_data': {'mime_type': 'image/png', 'data': img_str}},
-                    "Extract the mathematical problem statement from this image. Provide only the problem text in LaTeX
                 ],
                 config=types.GenerateContentConfig(temperature=0.1)
             )
-
             problem_text = ""
             for part in response.candidates[0].content.parts:
                 if hasattr(part, 'text') and part.text:
                     problem_text += part.text
-
             return problem_text.strip()
         except Exception as e:
             print(f"Error extracting problem text: {e}")
             return "[Problem extraction failed]"
-
     def run_agent_step(self, step_name, prompt, use_calculator=False):
-        """Run a single agent step"""
         try:
             config = types.GenerateContentConfig(
                 temperature=0.3,
                 thinking_config=types.ThinkingConfig(include_thoughts=True)
             )
-
             if use_calculator:
                 config.tools = [types.Tool(code_execution=types.ToolCodeExecution)]
-
             response = client.models.generate_content_stream(
                 model="gemini-2.5-flash",
                 contents=[prompt],
                 config=config
             )
-
             result = ""
             for chunk in response:
                 for part in chunk.candidates[0].content.parts:
                     if hasattr(part, 'text') and part.text:
                         result += part.text
-
             return result.strip()
-
         except Exception as e:
             print(f"Error in agent step {step_name}: {e}")
             return f"[Error in {step_name}: {str(e)}]"
 
 def send_to_telegram(image_data, caption="Nouvelle image uploadée"):
-    """Envoie l'image à un chat Telegram spécifié"""
     try:
         url = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendPhoto"
         files = {'photo': ('image.png', image_data)}
         data = {'chat_id': TELEGRAM_CHAT_ID, 'caption': caption}
-
         response = requests.post(url, files=files, data=data)
-
-        if response.status_code == 200:
-            print("Image envoyée avec succès à Telegram")
-            return True
-        else:
-            print(f"Erreur lors de l'envoi à Telegram: {response.text}")
-            return False
     except Exception as e:
-        print(f"Exception
         return False
 
 @app.route('/')
```
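The streaming helpers above (`run_agent_step`, `extract_problem_text`) share one pattern: walk the chunks of a streamed response and defensively accumulate only parts that actually carry text. A minimal stdlib-only sketch of that accumulation; the `SimpleNamespace` objects are hypothetical stand-ins for the SDK's response/part objects:

```python
from types import SimpleNamespace

def collect_text(chunks):
    """Concatenate the text of every part in a stream of chunks,
    skipping parts without a usable `text` attribute."""
    result = ""
    for chunk in chunks:
        for part in chunk.candidates[0].content.parts:
            if hasattr(part, 'text') and part.text:
                result += part.text
    return result.strip()

# Hypothetical stand-ins for streamed response objects.
chunk = SimpleNamespace(
    candidates=[SimpleNamespace(
        content=SimpleNamespace(parts=[
            SimpleNamespace(text="2 + 2 "),
            SimpleNamespace(text=None),   # e.g. a thought/image part with no text
            SimpleNamespace(text="= 4"),
        ])
    )]
)
print(collect_text([chunk]))  # → 2 + 2 = 4
```

The `hasattr`/truthiness guard is what lets the same loop survive thought, code, and image parts mixed into the stream.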
```diff
@@ -298,118 +189,89 @@ def index():
 @app.route('/solve', methods=['POST'])
 def solve():
     try:
-        # Récupération des données
         image_data = request.files['image'].read()
         use_calculator = request.form.get('use_calculator', 'false').lower() == 'true'
         use_extended_reasoning = request.form.get('use_extended_reasoning', 'false').lower() == 'true'
-
         img = Image.open(io.BytesIO(image_data))
-
-
-        mode_info = []
-        if use_calculator:
-            mode_info.append("calculatrice")
-        if use_extended_reasoning:
-            mode_info.append("réflexion étendue")
-
-        caption = f"Nouvelle image pour résolution ({', '.join(mode_info) if mode_info else 'mode standard'})"
-        send_to_telegram(image_data, caption)
-
-        # Traitement pour Gemini
         buffered = io.BytesIO()
         img.save(buffered, format="PNG")
         img_str = base64.b64encode(buffered.getvalue()).decode()
 
         def generate():
-            mode = 'starting'
             try:
                 if use_extended_reasoning:
-                    # Mode réflexion étendue - système d'agents
                     agent_system = AgentSystem()
-
-                    # Étape 0: Extraction
                     yield f'data: {json.dumps({"mode": "thinking"})}\n\n'
-                    yield f'data: {json.dumps({"content": "# 🔍 EXTRACTION DU PROBLÈME\n\nAnalyse de l
-
                     problem_text = agent_system.extract_problem_text(img_str)
                     yield f'data: {json.dumps({"content": f"**Problème identifié:**\n{problem_text}\n\n", "type": "text"})}\n\n'
-
-                    # Étape 1
-                    yield f'data: {json.dumps({"content": "
-
                     step1_prompt = agent_system.prompts["step1_initial_solution"].replace(
                         "[The mathematical problem will be inserted here]", problem_text
                     )
-
                     initial_solution = agent_system.run_agent_step("step1", step1_prompt, use_calculator)
-                    yield f'data: {json.dumps({"content":
-
-                    # Étape 2
-                    yield f'data: {json.dumps({"content": "
-
                     step2_prompt = agent_system.prompts["step2_self_improvement"].replace(
                         "[The initial solution attempt will be inserted here]", initial_solution
                     )
-
                     improved_solution = agent_system.run_agent_step("step2", step2_prompt, use_calculator)
-                    yield f'data: {json.dumps({"content":
-
-                    # Étape 3
-                    yield f'data: {json.dumps({"content": "
-
                     step3_prompt = agent_system.prompts["step3_verification"].replace(
                         "[The mathematical problem will be inserted here]", problem_text
                     ).replace(
                         "[The solution to be verified will be inserted here]", improved_solution
                     )
-
                     verification_result = agent_system.run_agent_step("step3", step3_prompt, False)
-                    yield f'data: {json.dumps({"content":
-
-                    # Vérifier si des corrections sont nécessaires
                     needs_correction = (
-                        "Critical Error" in verification_result
-                        "Justification Gap" in verification_result
-                        "invalid" in verification_result.lower()
                     )
-
                     if needs_correction:
-                        #
-                        yield f'data: {json.dumps({"content": "---\n\n# 🛠️ ÉTAPE 5: CORRECTION DES ERREURS IDENTIFIÉES\n\n", "type": "text"})}\n\n'
-
                         step5_prompt = agent_system.prompts["step5_correction"].replace(
                             "[The full verification report will be inserted here]", verification_result
                         ).replace(
                             "[The previous solution attempt will be inserted here]", improved_solution
                         )
-
                         corrected_solution = agent_system.run_agent_step("step5", step5_prompt, use_calculator)
-                        yield f'data: {json.dumps({"content": f"{corrected_solution}\n\n", "type": "text"})}\n\n'
-
                         final_solution = corrected_solution
                     else:
-                        yield f'data: {json.dumps({"content": "✅ **La solution a passé la vérification avec succès!**\n\n", "type": "text"})}\n\n'
                         final_solution = improved_solution
-
-
                     yield f'data: {json.dumps({"mode": "answering"})}\n\n'
                     yield f'data: {json.dumps({"content": "# 📋 SOLUTION FINALE\n\n", "type": "text"})}\n\n'
                     yield f'data: {json.dumps({"content": final_solution, "type": "text"})}\n\n'
-
                 else:
-                    # Mode standard
                     prompt = BASE_PROMPT
                     if use_calculator:
                         prompt += CODE_EXTENSION
-
                     config = types.GenerateContentConfig(
                         temperature=0.3,
                         thinking_config=types.ThinkingConfig(include_thoughts=True)
                     )
-
                     if use_calculator:
                         config.tools = [types.Tool(code_execution=types.ToolCodeExecution)]
-
                     response = client.models.generate_content_stream(
                         model="gemini-2.5-flash",
                         contents=[
```
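One substantive fix in this commit sits in `needs_correction`: as rendered in the removed lines, the three `in` tests are juxtaposed with no operator between them, which is a syntax error in Python; the added lines join them with `or`. A sketch of the corrected check as a standalone function (the function name is mine, the app inlines the expression):

```python
def needs_correction(verification_result: str) -> bool:
    """Trigger the correction step when the verifier flags any issue."""
    return (
        "Critical Error" in verification_result
        or "Justification Gap" in verification_result
        or "invalid" in verification_result.lower()
    )

print(needs_correction("The solution contains a Critical Error and is therefore invalid"))  # → True
print(needs_correction("The solution is correct"))  # → False
```

Note the check is purely lexical: it relies on the verification prompt forcing the grader to use the exact labels "Critical Error" and "Justification Gap" in its verdict.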
```diff
@@ -418,50 +280,24 @@ def solve():
                         ],
                         config=config
                     )
-
                     for chunk in response:
                         for part in chunk.candidates[0].content.parts:
-
-                            if hasattr(part, 'thought') and part.thought:
-                                if mode != "thinking":
-                                    yield f'data: {json.dumps({"mode": "thinking"})}\n\n'
-                                    mode = "thinking"
                                 yield f'data: {json.dumps({"content": part.text, "type": "text"})}\n\n'
-                            else:
-                                if mode != "answering":
-                                    yield f'data: {json.dumps({"mode": "answering"})}\n\n'
-                                    mode = "answering"
-
-                                # Gestion des différents types de contenu
-                                if hasattr(part, 'text') and part.text is not None:
-                                    yield f'data: {json.dumps({"content": part.text, "type": "text"})}\n\n'
-
-                                if hasattr(part, 'executable_code') and part.executable_code is not None:
-                                    yield f'data: {json.dumps({"content": part.executable_code.code, "type": "code"})}\n\n'
-
-                                if hasattr(part, 'code_execution_result') and part.code_execution_result is not None:
-                                    yield f'data: {json.dumps({"content": part.code_execution_result.output, "type": "result"})}\n\n'
-
-                                if hasattr(part, 'inline_data') and part.inline_data is not None:
-                                    img_data = base64.b64encode(part.inline_data.data).decode('utf-8')
-                                    yield f'data: {json.dumps({"content": img_data, "type": "image"})}\n\n'
 
             except Exception as e:
                 print(f"Error during generation: {e}")
-                yield f'data: {json.dumps({"error": "
 
         return Response(
             stream_with_context(generate()),
             mimetype='text/event-stream',
-            headers={
-                'Cache-Control': 'no-cache',
-                'X-Accel-Buffering': 'no'
-            }
         )
 
     except Exception as e:
         print(f"Error in solve endpoint: {e}")
-        return jsonify({'error': '
 
 if __name__ == '__main__':
     app.run(debug=True)
```
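Both versions of `generate()` stream their output to the browser as Server-Sent Events: every message is a JSON payload framed as `data: ...\n\n`. A small helper sketch of that framing (the `sse_event` name is mine; the app builds these strings inline with f-strings):

```python
import json

def sse_event(payload: dict) -> str:
    """Frame a JSON payload as a single Server-Sent Event message."""
    return f"data: {json.dumps(payload)}\n\n"

event = sse_event({"content": "# SOLUTION FINALE", "type": "text"})
print(event.startswith("data: "))  # → True
print(event.endswith("\n\n"))      # → True
```

The double newline is what terminates an SSE message; it is why the `Cache-Control: no-cache` and `X-Accel-Buffering: no` headers on the `text/event-stream` response matter, so proxies do not buffer events.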
After (context lines, added lines marked `+`):

```diff
 
 client = genai.Client(api_key=GOOGLE_API_KEY)
 
+# Prompt de base
 BASE_PROMPT = r"""
 # 🔍 GÉNÉRATEUR DE CORRECTION MATHÉMATIQUE (Version Directe)
 
```

```diff
 Vous êtes **Mariam-MATHEX-PRO**, un expert en mathématiques chargé de fournir des corrections. Votre objectif est d'être clair, précis et d'aller droit au but.
 
 ## 📊 FORMAT D'ENTRÉE ET SORTIE
+**ENTRÉE:** L'énoncé d'un exercice mathématique (niveau Terminale/Supérieur).
 **SORTIE:** UNIQUEMENT la correction de l'exercice **en français** avec rendu LaTeX.
 
 ## 🛠️ INSTRUCTIONS POUR LA CORRECTION
+1. **STRUCTURATION DE LA RÉPONSE :**
+   Organisez la solution en étapes logiques claires.
+   Si l'exercice comporte plusieurs questions ou parties, traitez-les séquentiellement.
 
+2. **DÉTAIL DU PROCÉDÉ DE CALCUL :**
+   Pour chaque étape significative, montrez les calculs.
+   Écrivez les calculs intermédiaires importants.
 
+3. **EXPLICATIONS TRÈS BRÈVES :**
+   Chaque étape doit avoir une explication textuelle très concise.
 
+4. **RÉSULTATS :**
+   Indiquez clairement les résultats intermédiaires et le résultat final.
 
 ## 🔧 RENDU MATHÉMATIQUE
+5. **RENDU MATHÉMATIQUE :**
+   Utilisez LaTeX pour toutes les expressions mathématiques.
 
 ## ✅ OBJECTIF PRINCIPAL
+Fournir une correction mathématique textuelle **en français** qui va droit au but.
 """
 
+# Extension du prompt
 CODE_EXTENSION = r"""
 ## 🧮 EXIGENCES TECHNIQUES (MODE CALCULATRICE ACTIVÉ)
 
+6. **CALCULS ET FIGURES :**
+   Utilisez Python pour tous les calculs numériques et graphiques.
 
+7. **VÉRIFICATION NUMÉRIQUE :**
+   Vérifiez vos calculs analytiques par du numérique en Python.
 """
 
```
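The step prompts in both versions carry literal placeholders such as `[The mathematical problem will be inserted here]`, which `solve()` later fills with plain `str.replace`. A sketch of that substitution, with the template text abbreviated and a made-up problem statement:

```python
# Abbreviated stand-in for one of the step prompts.
template = """### Problem ###
[The mathematical problem will be inserted here]"""

filled = template.replace(
    "[The mathematical problem will be inserted here]",
    r"Compute $\int_0^1 x^2\,dx$."
)
print(filled)
```

Plain `replace` works here because the bracketed markers are unlikely to occur in real problem text; `string.Template` would be the stricter alternative if collisions were a concern.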
```diff
 class AgentSystem:
     def __init__(self):
         self.prompts_dir = Path("prompts")
         self.prompts = self.load_prompts()
+
     def load_prompts(self):
         prompts = {}
         try:
             self.prompts_dir.mkdir(exist_ok=True)
 
+            default_prompts = {
+                "step1_initial_solution.md": """### Core Instructions ###
+* **Rigor is Paramount:** Your primary goal is to produce a complete and rigorously justified solution. ...
 ### Problem ###
 [The mathematical problem will be inserted here]""",
 
+                "step2_self_improvement.md": """You are a world-class mathematician.
+You have just produced the following draft solution.
+Your task is to review it carefully, identify flaws or gaps, and produce a new, improved solution.
 
 ### Draft Solution ###
 [The initial solution attempt will be inserted here]
 
 ### Your Task ###
+Provide the improved version of the solution.""",
 
+                "step3_verification.md": """You are an expert mathematician and a meticulous grader.
+Your task is to verify the provided solution step by step.
 
 ### Problem ###
 [The mathematical problem will be inserted here]
 
 ### Solution ###
 [The solution to be verified will be inserted here]
 
+### Task ###
+Act as an IMO grader. Generate a summary and a detailed verification log.
+""",
 
+                "step5_correction.md": """You are a brilliant mathematician attempting to solve a difficult problem.
 
+### Verification Report ###
 [The full verification report will be inserted here]
 
+### Previous Solution ###
 [The previous solution attempt will be inserted here]
 
+### Task ###
+Provide a new corrected solution that fixes all identified issues.
+"""
             }
+
             for filename, content in default_prompts.items():
                 prompt_file = self.prompts_dir / filename
                 if not prompt_file.exists():
                     prompt_file.write_text(content, encoding='utf-8')
                 prompts[filename.replace('.md', '')] = content
+
             for prompt_file in self.prompts_dir.glob("*.md"):
                 prompts[prompt_file.stem] = prompt_file.read_text(encoding='utf-8')
+
         except Exception as e:
             print(f"Error loading prompts: {e}")
         return prompts
+
     def extract_problem_text(self, img_str):
         try:
             response = client.models.generate_content(
                 model="gemini-2.5-flash",
                 contents=[
                     {'inline_data': {'mime_type': 'image/png', 'data': img_str}},
+                    "Extract the mathematical problem statement from this image. Provide only the problem text in LaTeX."
                 ],
                 config=types.GenerateContentConfig(temperature=0.1)
             )
             problem_text = ""
             for part in response.candidates[0].content.parts:
                 if hasattr(part, 'text') and part.text:
                     problem_text += part.text
             return problem_text.strip()
         except Exception as e:
             print(f"Error extracting problem text: {e}")
             return "[Problem extraction failed]"
+
     def run_agent_step(self, step_name, prompt, use_calculator=False):
         try:
             config = types.GenerateContentConfig(
                 temperature=0.3,
                 thinking_config=types.ThinkingConfig(include_thoughts=True)
             )
             if use_calculator:
                 config.tools = [types.Tool(code_execution=types.ToolCodeExecution)]
             response = client.models.generate_content_stream(
                 model="gemini-2.5-flash",
                 contents=[prompt],
                 config=config
             )
             result = ""
             for chunk in response:
                 for part in chunk.candidates[0].content.parts:
                     if hasattr(part, 'text') and part.text:
                         result += part.text
             return result.strip()
         except Exception as e:
             print(f"Error in agent step {step_name}: {e}")
             return f"[Error in {step_name}: {str(e)}]"
 
 def send_to_telegram(image_data, caption="Nouvelle image uploadée"):
     try:
         url = f"https://api.telegram.org/bot{TELEGRAM_BOT_TOKEN}/sendPhoto"
         files = {'photo': ('image.png', image_data)}
         data = {'chat_id': TELEGRAM_CHAT_ID, 'caption': caption}
         response = requests.post(url, files=files, data=data)
+        return response.status_code == 200
     except Exception as e:
+        print(f"Exception Telegram: {e}")
         return False
 
```
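`load_prompts` above seeds a directory with default `.md` prompt files on first run, then reads every `.md` file back, so edits on disk override the hardcoded defaults. A self-contained sketch of the same seed-then-load pattern, using a temporary directory in place of `prompts/`:

```python
import tempfile
from pathlib import Path

def load_prompts(prompts_dir: Path, defaults: dict) -> dict:
    """Write any missing default prompt files, then load every .md file."""
    prompts_dir.mkdir(exist_ok=True)
    for filename, content in defaults.items():
        prompt_file = prompts_dir / filename
        if not prompt_file.exists():
            prompt_file.write_text(content, encoding='utf-8')
    # Files already on disk win over the in-memory defaults.
    return {p.stem: p.read_text(encoding='utf-8')
            for p in prompts_dir.glob("*.md")}

with tempfile.TemporaryDirectory() as tmp:
    prompts = load_prompts(Path(tmp), {"step1_initial_solution.md": "### Core Instructions ###"})
    print(sorted(prompts))  # → ['step1_initial_solution']
```

Keying the dict by `Path.stem` is what lets the rest of the app look prompts up as `prompts["step1_initial_solution"]` without the `.md` suffix.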
```diff
 @app.route('/')
```

```diff
 @app.route('/solve', methods=['POST'])
 def solve():
     try:
         image_data = request.files['image'].read()
         use_calculator = request.form.get('use_calculator', 'false').lower() == 'true'
         use_extended_reasoning = request.form.get('use_extended_reasoning', 'false').lower() == 'true'
+
         img = Image.open(io.BytesIO(image_data))
+        send_to_telegram(image_data, "Nouvelle image reçue")
+
         buffered = io.BytesIO()
         img.save(buffered, format="PNG")
         img_str = base64.b64encode(buffered.getvalue()).decode()
 
         def generate():
             try:
                 if use_extended_reasoning:
                     agent_system = AgentSystem()
+
+                    # Étape 0: Extraction
                     yield f'data: {json.dumps({"mode": "thinking"})}\n\n'
+                    yield f'data: {json.dumps({"content": "# 🔍 EXTRACTION DU PROBLÈME\n\nAnalyse de l’image pour extraire l’énoncé du problème...\n\n", "type": "text"})}\n\n'
+
                     problem_text = agent_system.extract_problem_text(img_str)
                     yield f'data: {json.dumps({"content": f"**Problème identifié:**\n{problem_text}\n\n", "type": "text"})}\n\n'
+
+                    # Étape 1
+                    yield f'data: {json.dumps({"content": "# 📝 ÉTAPE 1: SOLUTION INITIALE\n\n", "type": "text"})}\n\n'
                     step1_prompt = agent_system.prompts["step1_initial_solution"].replace(
                         "[The mathematical problem will be inserted here]", problem_text
                     )
                     initial_solution = agent_system.run_agent_step("step1", step1_prompt, use_calculator)
+                    yield f'data: {json.dumps({"content": initial_solution, "type": "text"})}\n\n'
+
+                    # Étape 2
+                    yield f'data: {json.dumps({"content": "# 🔧 ÉTAPE 2: AUTO-AMÉLIORATION\n\n", "type": "text"})}\n\n'
                     step2_prompt = agent_system.prompts["step2_self_improvement"].replace(
                         "[The initial solution attempt will be inserted here]", initial_solution
                     )
                     improved_solution = agent_system.run_agent_step("step2", step2_prompt, use_calculator)
+                    yield f'data: {json.dumps({"content": improved_solution, "type": "text"})}\n\n'
+
+                    # Étape 3
+                    yield f'data: {json.dumps({"content": "# ✅ ÉTAPE 3: VÉRIFICATION\n\n", "type": "text"})}\n\n'
                     step3_prompt = agent_system.prompts["step3_verification"].replace(
                         "[The mathematical problem will be inserted here]", problem_text
                     ).replace(
                         "[The solution to be verified will be inserted here]", improved_solution
                     )
                     verification_result = agent_system.run_agent_step("step3", step3_prompt, False)
+                    yield f'data: {json.dumps({"content": verification_result, "type": "text"})}\n\n'
+
                     needs_correction = (
+                        "Critical Error" in verification_result
+                        or "Justification Gap" in verification_result
+                        or "invalid" in verification_result.lower()
                     )
+
                     if needs_correction:
+                        yield f'data: {json.dumps({"content": "# 🛠️ ÉTAPE 5: CORRECTION\n\n", "type": "text"})}\n\n'
                         step5_prompt = agent_system.prompts["step5_correction"].replace(
                             "[The full verification report will be inserted here]", verification_result
                         ).replace(
                             "[The previous solution attempt will be inserted here]", improved_solution
                         )
                         corrected_solution = agent_system.run_agent_step("step5", step5_prompt, use_calculator)
                         final_solution = corrected_solution
+                        yield f'data: {json.dumps({"content": corrected_solution, "type": "text"})}\n\n'
                     else:
                         final_solution = improved_solution
+                        yield f'data: {json.dumps({"content": "✅ La solution a été validée sans correction.\n\n", "type": "text"})}\n\n'
+
                     yield f'data: {json.dumps({"mode": "answering"})}\n\n'
                     yield f'data: {json.dumps({"content": "# 📋 SOLUTION FINALE\n\n", "type": "text"})}\n\n'
                     yield f'data: {json.dumps({"content": final_solution, "type": "text"})}\n\n'
+
                 else:
                     prompt = BASE_PROMPT
                     if use_calculator:
                         prompt += CODE_EXTENSION
                     config = types.GenerateContentConfig(
                         temperature=0.3,
                         thinking_config=types.ThinkingConfig(include_thoughts=True)
                     )
                     if use_calculator:
                         config.tools = [types.Tool(code_execution=types.ToolCodeExecution)]
                     response = client.models.generate_content_stream(
                         model="gemini-2.5-flash",
                         contents=[
```

```diff
                         ],
                         config=config
                     )
                     for chunk in response:
                         for part in chunk.candidates[0].content.parts:
+                            if hasattr(part, 'text') and part.text:
                                 yield f'data: {json.dumps({"content": part.text, "type": "text"})}\n\n'
 
             except Exception as e:
                 print(f"Error during generation: {e}")
+                yield f'data: {json.dumps({"error": "Erreur inattendue"})}\n\n'
 
         return Response(
             stream_with_context(generate()),
             mimetype='text/event-stream',
+            headers={'Cache-Control': 'no-cache', 'X-Accel-Buffering': 'no'}
         )
 
     except Exception as e:
         print(f"Error in solve endpoint: {e}")
+        return jsonify({'error': 'Erreur inattendue'}), 500
 
 if __name__ == '__main__':
     app.run(debug=True)
```
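The endpoint base64-encodes the uploaded PNG before handing it to Gemini as `inline_data`, and the same encoding is used in reverse when streamed images come back. The round-trip in stdlib terms; the byte string here is a short stand-in, not a real PNG payload:

```python
import base64

image_bytes = b"\x89PNG\r\n\x1a\n...stand-in..."       # stand-in for buffered.getvalue()
img_str = base64.b64encode(image_bytes).decode()       # ASCII string given to the API
restored = base64.b64decode(img_str)                   # bytes recovered on the other side

print(restored == image_bytes)  # → True
```

Decoding to `str` matters because `json.dumps` (used for the SSE payloads) accepts strings, not raw bytes.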