SreekarB committed on
Commit dab32c8 · verified · 1 Parent(s): 861f14f

Update annotated_casl_app.py

Files changed (1):
  1. annotated_casl_app.py +813 -82
annotated_casl_app.py CHANGED
@@ -18,8 +18,255 @@ if ANTHROPIC_API_KEY:
 else:
     logger.warning("Claude API key not found - using demo mode")
 
-def call_claude_api(prompt):
-    """Call Claude API for annotation or analysis"""
     if not ANTHROPIC_API_KEY:
         return "❌ Claude API key not configured. Please set ANTHROPIC_API_KEY environment variable."
 
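The removed `call_claude_api` helper (and its replacements added later in this diff) posts a prompt to the Anthropic Messages endpoint via `requests`. A minimal sketch of the request shape, assuming the headers and payload visible in the added code; `build_claude_request` is a hypothetical helper and no network call is made:

```python
import json

def build_claude_request(prompt, api_key, model="claude-3-5-sonnet-20241022", max_tokens=4096):
    """Assemble the headers and JSON body for an Anthropic Messages request
    (mirrors the fields used in this diff; does not send anything)."""
    headers = {
        "Content-Type": "application/json",
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
    }
    data = {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, data

headers, data = build_claude_request("Annotate this transcript.", api_key="sk-test")
print(json.dumps(data, indent=2))
```

The actual send would be `requests.post("https://api.anthropic.com/v1/messages", headers=headers, json=data, timeout=90)`, as in the added code below.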
@@ -59,6 +306,259 @@ def call_claude_api(prompt):
         logger.error(f"Error calling Claude API: {str(e)}")
         return f"❌ Error: {str(e)}"
 
 def check_annotation_completeness(original_transcript, annotated_transcript):
     """Check if annotation is complete by verifying last 3 words are present"""
     import re
@@ -1441,44 +1941,119 @@ with gr.Blocks(title="Speech Analysis", theme=gr.themes.Soft()) as demo:
                     show_copy_button=True
                 )
 
-    with gr.Tab("📊 Analysis Only"):
-        gr.Markdown("### Step 2: Analyze pre-annotated transcript")
 
         with gr.Row():
             with gr.Column():
-                annotated_input = gr.Textbox(
-                    label="Annotated Transcript",
-                    placeholder="Paste annotated transcript with [MARKERS] here...",
-                    lines=10
                 )
 
                 with gr.Row():
-                    age_input_3 = gr.Textbox(label="Age", value="45")
-                    gender_input_3 = gr.Dropdown(
                         label="Gender",
                         choices=["Male", "Female", "Other"],
                         value="Male"
                     )
 
-                slp_notes_input_3 = gr.Textbox(
                     label="SLP Clinical Notes (Optional)",
-                    lines=3
                 )
 
-                example_annotated_btn = gr.Button("📄 Load Example Annotated Transcript", variant="secondary", size="sm")
-                analyze_only_btn = gr.Button("📊 Analyze Annotated Transcript", variant="secondary")
 
             with gr.Column():
-                analysis_only_output = gr.Textbox(
-                    label="Comprehensive Analysis",
-                    lines=20,
                     show_copy_button=True
                 )
 
     # Event handlers - now all components are defined
     example_btn.click(fn=lambda: example_transcript, outputs=[transcript_input])
     example_btn_2.click(fn=lambda: example_transcript, outputs=[transcript_input_2])
-    example_annotated_btn.click(fn=lambda: example_annotated, outputs=[annotated_input])
 
     file_input.change(
         fn=process_file,
@@ -1702,52 +2277,213 @@ with gr.Blocks(title="Speech Analysis", theme=gr.themes.Soft()) as demo:
     4. Complete ALL 12 sections of the comprehensive analysis
 
     COMPREHENSIVE SPEECH SAMPLE ANALYSIS:
 
-    1. SPEECH FACTORS (with EXACT verified counts and specific citations):
-    A. Fluency Issues: Use the verified counts above, cite specific examples from transcript
-    B. Word Retrieval Issues: Use verified counts, analyze patterns with examples
-    C. Grammatical Errors: Use verified error counts, categorize with examples
-
-    2. LANGUAGE SKILLS ASSESSMENT (with verified evidence):
-    A. Lexical/Semantic Skills: Use verified lexical diversity measures and vocabulary data
-    B. Syntactic Skills: Use verified sentence complexity counts and MLU data
-    C. Supralinguistic Skills: Clinical interpretation with transcript examples
-
-    3. COMPLEX SENTENCE ANALYSIS (with verified counts):
-    A. Coordinating Conjunctions: Count from transcript, use verified sentence data
-    B. Subordinating Conjunctions: Count from transcript, analyze complexity
-    C. Sentence Structure Analysis: Use verified MLU and sentence type data
-
-    4. FIGURATIVE LANGUAGE ANALYSIS (with verified counts):
-    A. Similes and Metaphors: Use verified figurative markers, cite examples
-    B. Idioms and Non-literal Language: Analysis with specific examples
-
-    5. PRAGMATIC LANGUAGE ASSESSMENT (with verified examples):
-    A. Discourse Management: Use verified pragmatic marker counts
-    B. Referential Communication: Use verified pronoun reference data
-
-    6. VOCABULARY AND SEMANTIC ANALYSIS (with verified quantification):
-    A. Vocabulary Diversity: Use ALL verified lexical diversity measures (MTLD, HDD, etc.)
-    B. Semantic Relationships: Use verified word frequency and sophistication data
-
-    7. MORPHOLOGICAL AND PHONOLOGICAL ANALYSIS (with verified counts):
-    A. Morphological Markers: Use verified morphological data and MLU morphemes
-    B. Phonological Patterns: Analysis from transcript evidence
-
-    8. COGNITIVE-LINGUISTIC FACTORS (with verified evidence):
-    A. Working Memory: Use verified sentence length and complexity data
-    B. Processing Efficiency: Use verified fluency and error pattern data
-    C. Executive Function: Use verified self-correction patterns
-
-    9. FLUENCY AND RHYTHM ANALYSIS (with verified quantification):
-    A. Disfluency Patterns: Use verified fluency counts and rates
-    B. Language Flow: Use verified sentence variability data
-
-    10. QUANTITATIVE METRICS (report ALL verified data):
-    Report all the verified statistical values provided above
-
-    11. CLINICAL IMPLICATIONS:
-    Based on verified data, provide clinical interpretation and recommendations
 
     12. PROGNOSIS AND SUMMARY:
     Overall profile based on comprehensive verified data
@@ -1810,25 +2546,20 @@ with gr.Blocks(title="Speech Analysis", theme=gr.themes.Soft()) as demo:
         outputs=[annotation_output]
     )
 
-    def analyze_standalone(annotated_transcript, age, gender, slp_notes):
-        """Analyze standalone annotated transcript"""
-        # Extract original transcript by removing markers
-        original_transcript = annotated_transcript
-        for marker in ['[FILLER]', '[FALSE_START]', '[REPETITION]', '[REVISION]', '[PAUSE]',
-                       '[CIRCUMLOCUTION]', '[INCOMPLETE]', '[GENERIC]', '[WORD_SEARCH]',
-                       '[GRAM_ERROR]', '[SYNTAX_ERROR]', '[MORPH_ERROR]', '[RUN_ON]',
-                       '[SIMPLE_VOCAB]', '[COMPLEX_VOCAB]', '[SEMANTIC_ERROR]',
-                       '[TOPIC_SHIFT]', '[TANGENT]', '[INAPPROPRIATE]', '[COHERENCE_BREAK]',
-                       '[SIMPLE_SENT]', '[COMPLEX_SENT]', '[COMPOUND_SENT]', '[FIGURATIVE]',
-                       '[PRONOUN_REF]', '[MAZING]', '[PERSEVERATION]']:
-            original_transcript = original_transcript.replace(marker, '')
-
-        return analyze_with_backup(annotated_transcript, original_transcript, age, gender, slp_notes)
-
-    analyze_only_btn.click(
-        fn=analyze_standalone,
-        inputs=[annotated_input, age_input_3, gender_input_3, slp_notes_input_3],
-        outputs=[analysis_only_output]
     )
 
 if __name__ == "__main__":
 
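The removed `analyze_standalone` recovers the plain transcript by calling `str.replace` once per bracketed marker. The same stripping can be sketched with a single regular expression; `strip_markers` and `MARKER_RE` are hypothetical names, not part of the app, and unlike the per-marker loop this variant also eats the space after each marker:

```python
import re

# Match any bracketed all-caps marker such as [FILLER] or [WORD_SEARCH],
# plus one trailing space if present, so the words rejoin cleanly.
MARKER_RE = re.compile(r"\[[A-Z_]+\]\s?")

def strip_markers(annotated: str) -> str:
    """Remove annotation markers from an annotated transcript."""
    return MARKER_RE.sub("", annotated)

print(strip_markers("I um [FILLER] went to the [WORD_SEARCH] place"))
```

One regex pass also handles markers the hard-coded list might miss, at the cost of stripping any bracketed all-caps token, so a whitelist like the original's is the safer choice if transcripts can contain such tokens legitimately.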
 else:
     logger.warning("Claude API key not found - using demo mode")
 
+def segment_response_by_sections(response_text):
+    """Segment response by section titles and return a dictionary of sections"""
+    required_sections = [
+        "1. SPEECH FACTORS",
+        "2. LANGUAGE SKILLS ASSESSMENT",
+        "3. COMPLEX SENTENCE ANALYSIS",
+        "4. FIGURATIVE LANGUAGE ANALYSIS",
+        "5. PRAGMATIC LANGUAGE ASSESSMENT",
+        "6. VOCABULARY AND SEMANTIC ANALYSIS",
+        "7. MORPHOLOGICAL AND PHONOLOGICAL ANALYSIS",
+        "8. COGNITIVE-LINGUISTIC FACTORS",
+        "9. FLUENCY AND RHYTHM ANALYSIS",
+        "10. QUANTITATIVE METRICS",
+        "11. CLINICAL IMPLICATIONS",
+        "12. PROGNOSIS AND SUMMARY"
+    ]
+
+    sections = {}
+    current_section = None
+    current_content = []
+
+    lines = response_text.split('\n')
+
+    for line in lines:
+        # Check if this line is a section header
+        is_section_header = False
+        for section in required_sections:
+            if section in line:
+                # Save previous section if exists
+                if current_section and current_content:
+                    sections[current_section] = '\n'.join(current_content).strip()
+
+                # Start new section
+                current_section = section
+                current_content = []
+                is_section_header = True
+                break
+
+        # If not a section header, add to current section content
+        if not is_section_header and current_section:
+            current_content.append(line)
+
+    # Save the last section
+    if current_section and current_content:
+        sections[current_section] = '\n'.join(current_content).strip()
+
+    return sections
+
+def combine_sections_smartly(sections_dict):
+    """Combine sections in the correct order without duplicates"""
+    required_sections = [
+        "1. SPEECH FACTORS",
+        "2. LANGUAGE SKILLS ASSESSMENT",
+        "3. COMPLEX SENTENCE ANALYSIS",
+        "4. FIGURATIVE LANGUAGE ANALYSIS",
+        "5. PRAGMATIC LANGUAGE ASSESSMENT",
+        "6. VOCABULARY AND SEMANTIC ANALYSIS",
+        "7. MORPHOLOGICAL AND PHONOLOGICAL ANALYSIS",
+        "8. COGNITIVE-LINGUISTIC FACTORS",
+        "9. FLUENCY AND RHYTHM ANALYSIS",
+        "10. QUANTITATIVE METRICS",
+        "11. CLINICAL IMPLICATIONS",
+        "12. PROGNOSIS AND SUMMARY"
+    ]
+
+    combined_parts = []
+    combined_parts.append("COMPREHENSIVE SPEECH SAMPLE ANALYSIS")
+    combined_parts.append("")
+
+    for section in required_sections:
+        if section in sections_dict:
+            combined_parts.append(section)
+            combined_parts.append("")
+            combined_parts.append(sections_dict[section])
+            combined_parts.append("")
+
+    return '\n'.join(combined_parts)
+
+def call_claude_api_with_continuation(prompt, max_continuations=0):
+    """Call Claude API with a smart continuation system - continues until all sections are complete or a safety limit is reached"""
+    if not ANTHROPIC_API_KEY:
+        return "❌ Claude API key not configured. Please set ANTHROPIC_API_KEY environment variable."
+
+    # Define all required sections
+    required_sections = [
+        "1. SPEECH FACTORS",
+        "2. LANGUAGE SKILLS ASSESSMENT",
+        "3. COMPLEX SENTENCE ANALYSIS",
+        "4. FIGURATIVE LANGUAGE ANALYSIS",
+        "5. PRAGMATIC LANGUAGE ASSESSMENT",
+        "6. VOCABULARY AND SEMANTIC ANALYSIS",
+        "7. MORPHOLOGICAL AND PHONOLOGICAL ANALYSIS",
+        "8. COGNITIVE-LINGUISTIC FACTORS",
+        "9. FLUENCY AND RHYTHM ANALYSIS",
+        "10. QUANTITATIVE METRICS",
+        "11. CLINICAL IMPLICATIONS",
+        "12. PROGNOSIS AND SUMMARY"
+    ]
+
+    # Safety limits to prevent infinite loops
+    MAX_CONTINUATIONS = 30  # Increased from 20 to 30 API calls
+    MAX_TIME_MINUTES = 15  # Increased from 10 to 15 minutes total
+    MIN_PROGRESS_PER_CALL = 0  # Changed from 1 to 0 to allow more flexibility
+
+    try:
+        all_sections = {}  # Store all sections found across all parts
+        continuation_count = 0
+        start_time = time.time()
+        last_section_count = 0  # Track progress between calls
+
+        # Add continuation instruction to original prompt
+        initial_prompt = prompt + "\n\nCRITICAL INSTRUCTIONS: You MUST complete ALL 12 sections of the analysis. If your response is cut off or incomplete, end with <CONTINUE> to indicate more content is needed. Do not skip any sections. Use the checklist to ensure all sections are completed."
+
+        while True:  # Continue until complete or a safety limit trips
+            if continuation_count == 0:
+                current_prompt = initial_prompt
+            else:
+                # For continuations, provide context about what was already covered
+                missing_sections = [s for s in required_sections if s not in all_sections]
+                missing_text = "\n".join([f"- {section}" for section in missing_sections])
+
+                current_prompt = prompt + f"\n\nCONTINUATION {continuation_count + 1}: The following sections are STILL MISSING and MUST be completed:\n\n{missing_text}\n\nCRITICAL: Provide ONLY these missing sections. Do not repeat any sections that are already complete. Focus exclusively on the missing sections listed above. Complete ALL missing sections in this response."
+
+            headers = {
+                "Content-Type": "application/json",
+                "x-api-key": ANTHROPIC_API_KEY,
+                "anthropic-version": "2023-06-01"
+            }
+
+            data = {
+                "model": "claude-3-5-sonnet-20241022",
+                "max_tokens": 4096,
+                "messages": [
+                    {
+                        "role": "user",
+                        "content": current_prompt
+                    }
+                ]
+            }
+
+            response = requests.post(
+                "https://api.anthropic.com/v1/messages",
+                headers=headers,
+                json=data,
+                timeout=90
+            )
+
+            if response.status_code == 200:
+                response_json = response.json()
+                response_text = response_json['content'][0]['text']
+
+                # Log response for debugging
+                print(f"\n=== PART {continuation_count + 1} RESPONSE ===")
+                print(f"Length: {len(response_text)} characters")
+                print(f"Contains CONTINUE: {'<CONTINUE>' in response_text}")
+                print(f"First 200 chars: {response_text[:200]}...")
+                print(f"Last 200 chars: {response_text[-200:]}...")
+                print("=" * 50)
+
+                # Segment this part and add new sections to our collection
+                part_sections = segment_response_by_sections(response_text)
+                for section, content in part_sections.items():
+                    if section not in all_sections:  # Only add if not already present
+                        all_sections[section] = content
+                        print(f"Added section: {section}")
+                    else:
+                        print(f"Skipped duplicate section: {section}")
+
+                # Check completion status
+                completed_sections = len(all_sections)
+                missing_sections = [s for s in required_sections if s not in all_sections]
+
+                print(f"Completed sections: {completed_sections}/12")
+                print(f"Missing sections: {missing_sections}")
+
+                # Check if response indicates continuation is needed
+                needs_continuation = "<CONTINUE>" in response_text
+
+                print(f"Needs continuation: {needs_continuation}")
+                print(f"Continuation count: {continuation_count}")
+
+                # Safety checks to prevent infinite loops
+                current_time = time.time()
+                elapsed_minutes = (current_time - start_time) / 60
+                current_section_count = len(all_sections)
+                progress_made = current_section_count - last_section_count
+
+                # Check if we're making progress
+                if continuation_count > 0 and progress_made < MIN_PROGRESS_PER_CALL:
+                    # Only stop if we've made multiple calls with no progress
+                    if continuation_count > 3:  # Allow more attempts before giving up
+                        logger.warning(f"No progress made in last call (added {progress_made} sections). Stopping to prevent infinite loop.")
+                        break
+                    else:
+                        logger.info(f"No progress in call {continuation_count}, but continuing to allow more attempts...")
+
+                # Check time limit
+                if elapsed_minutes > MAX_TIME_MINUTES:
+                    logger.warning(f"Time limit exceeded ({elapsed_minutes:.1f} minutes). Stopping to prevent excessive API usage.")
+                    break
+
+                # Check continuation limit
+                if continuation_count >= MAX_CONTINUATIONS:
+                    logger.warning(f"Continuation limit reached ({MAX_CONTINUATIONS} calls). Stopping to prevent excessive API usage.")
+                    break
+
+                # Continue if <CONTINUE> is present and safety checks pass
+                if needs_continuation:
+                    continuation_count += 1
+                    last_section_count = current_section_count
+                    logger.info(f"Continuing analysis (attempt {continuation_count}/{MAX_CONTINUATIONS}, {elapsed_minutes:.1f} minutes elapsed)")
+                    continue
+                else:
+                    break
+            else:
+                logger.error(f"Claude API error: {response.status_code} - {response.text}")
+                return f"❌ Claude API Error: {response.status_code}"
+
+    except Exception as e:
+        logger.error(f"Error calling Claude API: {str(e)}")
+        return f"❌ Error: {str(e)}"
+
+    # Combine all sections in the correct order
+    final_response = combine_sections_smartly(all_sections)
+
+    # Log final results
+    print("\n=== FINAL SMART VALIDATION ===")
+    print(f"Total sections found: {len(all_sections)}")
+    print(f"All sections present: {len(all_sections) == 12}")
+    print(f"Missing sections: {[s for s in required_sections if s not in all_sections]}")
+    print(f"Total time: {(time.time() - start_time) / 60:.1f} minutes")
+    print(f"Total API calls: {continuation_count + 1}")
+    print("=" * 50)
+
+    # Add completion indicator with safety info
+    if continuation_count > 0:
+        final_response += f"\n\n[Analysis completed in {continuation_count + 1} parts over {(time.time() - start_time) / 60:.1f} minutes]"
+
+    # Add warning if incomplete due to safety limits
+    if len(all_sections) < 12:
+        missing_sections = [s for s in required_sections if s not in all_sections]
+        final_response += f"\n\n⚠️ WARNING: Analysis incomplete due to safety limits. Missing sections: {', '.join(missing_sections)}"
+        final_response += "\n\n💡 TIP: Try running the analysis again, or use the 'Targeted Analysis' tab to focus on specific areas."
+        final_response += "\nThe 'Quick Questions' tab may also provide faster results for specific areas of interest."
+
+    return final_response
+
+def call_claude_api_quick_analysis(prompt):
+    """Call Claude API for quick focused analysis - single response only"""
     if not ANTHROPIC_API_KEY:
         return "❌ Claude API key not configured. Please set ANTHROPIC_API_KEY environment variable."
 
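The segment-then-recombine round trip above can be exercised without an API call. A condensed, self-contained sketch of the same idea, with the section list trimmed to three entries for brevity and names (`segment`, `combine`, `REQUIRED`) that differ from the app's functions:

```python
# Shortened stand-in for the app's 12-entry required_sections list.
REQUIRED = ["1. SPEECH FACTORS", "2. LANGUAGE SKILLS ASSESSMENT", "12. PROGNOSIS AND SUMMARY"]

def segment(text):
    """Split a model reply into {section header: body} on known headers."""
    sections, current, buf = {}, None, []
    for line in text.split("\n"):
        header = next((s for s in REQUIRED if s in line), None)
        if header:
            if current and buf:
                sections[current] = "\n".join(buf).strip()
            current, buf = header, []
        elif current:
            buf.append(line)
    if current and buf:
        sections[current] = "\n".join(buf).strip()
    return sections

def combine(sections):
    """Reassemble collected sections in canonical order, skipping gaps."""
    parts = ["COMPREHENSIVE SPEECH SAMPLE ANALYSIS", ""]
    for s in REQUIRED:
        if s in sections:
            parts += [s, "", sections[s], ""]
    return "\n".join(parts)

reply = "1. SPEECH FACTORS\nFillers: 4\n12. PROGNOSIS AND SUMMARY\nGood prognosis."
print(combine(segment(reply)))
```

Because `segment` keys by header, re-running it on a continuation reply and merging dicts (keeping first occurrences) deduplicates repeated sections, which is exactly how the continuation loop accumulates `all_sections`.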
 
         logger.error(f"Error calling Claude API: {str(e)}")
         return f"❌ Error: {str(e)}"
 
+def call_claude_api(prompt):
+    """Call Claude API directly (legacy function for backward compatibility)"""
+    return call_claude_api_quick_analysis(prompt)
+
+def answer_quick_question(transcript_content, question, age, gender, slp_notes):
+    """Answer a specific question about the transcript quickly"""
+    if not transcript_content or len(transcript_content.strip()) < 20:
+        return "Error: Please provide a transcript for analysis."
+
+    if not question or len(question.strip()) < 5:
+        return "Error: Please provide a specific question."
+
+    # Add SLP notes to the prompt if provided
+    notes_section = ""
+    if slp_notes and slp_notes.strip():
+        notes_section = f"""
+
+SLP CLINICAL NOTES:
+{slp_notes.strip()}
+"""
+
+    prompt = f"""
+You are a speech-language pathologist answering a specific question about a speech sample.
+
+Patient: {age}-year-old {gender}
+
+TRANSCRIPT:
+{transcript_content}{notes_section}
+
+QUESTION: {question}
+
+INSTRUCTIONS:
+- Provide a focused, detailed answer to the specific question asked
+- Include specific examples from the transcript with exact quotes
+- Provide quantitative data when relevant (counts, percentages, rates)
+- Give clinical interpretation and significance
+- Keep the response focused on the question but thorough in analysis
+- If the question relates to multiple areas, address all relevant aspects
+
+Answer the question with specific evidence from the transcript:
+"""
+
+    return call_claude_api_quick_analysis(prompt)
+
+def analyze_targeted_area(transcript_content, analysis_area, age, gender, slp_notes):
+    """Perform targeted analysis of a specific area"""
+    if not transcript_content or len(transcript_content.strip()) < 20:
+        return "Error: Please provide a transcript for analysis."
+
+    # Add SLP notes to the prompt if provided
+    notes_section = ""
+    if slp_notes and slp_notes.strip():
+        notes_section = f"""
+
+SLP CLINICAL NOTES:
+{slp_notes.strip()}
+"""
+
+    # Define analysis prompts for different areas
+    analysis_prompts = {
+        "Fluency and Disfluencies": """
+Conduct a comprehensive FLUENCY ANALYSIS focusing on:
+
+1. DISFLUENCY TYPES AND COUNTS:
+- Count and quote ALL filler words ("um", "uh", "like", "you know", etc.)
+- Count and quote ALL false starts and self-corrections
+- Count and quote ALL word/phrase repetitions
+- Count and quote ALL revisions and restarts
+- Count and quote ALL prolongations or blocks (if noted)
+
+2. DISFLUENCY PATTERNS:
+- Calculate disfluency rate per 100 words
+- Identify most frequent disfluency types
+- Analyze clustering patterns
+- Assess impact on communication effectiveness
+
+3. FLUENCY FACILITATORS:
+- Identify fluent segments and their characteristics
+- Note any fluency-enhancing contexts
+- Assess overall speech rhythm and flow
+
+4. CLINICAL SIGNIFICANCE:
+- Compare to age-appropriate norms
+- Assess severity level
+- Recommend intervention priorities
+""",
+
+        "Grammar and Syntax": """
+Conduct a comprehensive GRAMMATICAL ANALYSIS focusing on:
+
+1. MORPHOLOGICAL ANALYSIS:
+- Count and categorize ALL grammatical morphemes (plurals, past tense, etc.)
+- Identify missing obligatory morphemes with specific examples
+- Analyze morphological error patterns
+
+2. SYNTACTIC STRUCTURES:
+- Analyze sentence types (simple, compound, complex)
+- Count and quote ALL grammatical errors
+- Assess word order and sentence construction
+- Evaluate use of conjunctions and subordination
+
+3. VERB USAGE:
+- Analyze tense consistency and accuracy
+- Count subject-verb agreement errors
+- Assess auxiliary verb usage
+
+4. CLINICAL IMPLICATIONS:
+- Identify primary grammatical targets for intervention
+- Assess developmental appropriateness
+- Recommend specific therapy goals
+""",
+
+        "Vocabulary and Semantics": """
+Conduct a comprehensive VOCABULARY ANALYSIS focusing on:
+
+1. LEXICAL DIVERSITY:
+- Calculate Type-Token Ratio (TTR)
+- Identify vocabulary sophistication levels
+- Count unique vs. repeated words
+
+2. SEMANTIC ACCURACY:
+- Identify and quote ALL semantic errors or inappropriate word choices
+- Analyze word-finding difficulties and circumlocutions
+- Assess precision of vocabulary use
+
+3. VOCABULARY CATEGORIES:
+- Categorize vocabulary by semantic fields
+- Identify high-frequency vs. low-frequency words
+- Assess academic vs. conversational vocabulary
+
+4. WORD RETRIEVAL:
+- Count and analyze word-finding pauses
+- Identify compensatory strategies
+- Assess overall lexical access efficiency
+
+5. CLINICAL RECOMMENDATIONS:
+- Identify vocabulary intervention targets
+- Recommend strategies for word-finding support
+""",
+
+        "Pragmatics and Discourse": """
+Conduct a comprehensive PRAGMATIC ANALYSIS focusing on:
+
+1. DISCOURSE ORGANIZATION:
+- Analyze topic initiation, maintenance, and transitions
+- Assess narrative structure and coherence
+- Evaluate logical sequencing of ideas
+
+2. CONVERSATIONAL SKILLS:
+- Assess turn-taking appropriateness
+- Analyze response relevance and appropriateness
+- Evaluate social communication effectiveness
+
+3. REFERENTIAL COMMUNICATION:
+- Count and analyze unclear pronoun references
+- Assess use of demonstratives ("this," "that")
+- Evaluate overall referential clarity
+
+4. PRAGMATIC APPROPRIATENESS:
+- Identify any inappropriate content or responses
+- Assess contextual appropriateness
+- Evaluate social awareness in communication
+
+5. CLINICAL IMPLICATIONS:
+- Identify pragmatic intervention priorities
+- Recommend social communication goals
+""",
+
+        "Sentence Complexity": """
+Conduct a comprehensive SENTENCE COMPLEXITY ANALYSIS focusing on:
+
+1. SENTENCE TYPES:
+- Count and categorize simple sentences with examples
+- Count and categorize compound sentences with examples
+- Count and categorize complex sentences with examples
+
+2. CLAUSE ANALYSIS:
+- Calculate clauses per utterance
+- Analyze subordinate clause usage
+- Assess coordination patterns
+
+3. PHRASE STRUCTURES:
+- Analyze noun phrase complexity
+- Assess verb phrase elaboration
+- Evaluate prepositional phrase usage
+
+4. SYNTACTIC MATURITY:
+- Calculate Mean Length of Utterance (MLU)
+- Assess developmental appropriateness
+- Identify areas for syntactic growth
+
+5. CLINICAL RECOMMENDATIONS:
+- Identify targets for increasing complexity
+- Recommend scaffolding strategies
+""",
+
+        "Word Finding and Retrieval": """
+Conduct a comprehensive WORD RETRIEVAL ANALYSIS focusing on:
+
+1. WORD-FINDING DIFFICULTIES:
+- Count and quote ALL instances of word-finding pauses
+- Identify and quote ALL circumlocutions
+- Count and quote ALL use of generic terms ("thing," "stuff")
+
+2. RETRIEVAL STRATEGIES:
+- Identify compensatory strategies used
+- Analyze self-cueing attempts
+- Assess success rate of retrieval attempts
+
+3. ERROR PATTERNS:
+- Categorize word-finding errors by type
+- Identify semantic vs. phonological retrieval issues
+- Analyze error consistency patterns
+
+4. CONTEXTUAL FACTORS:
+- Identify contexts that facilitate vs. hinder retrieval
+- Assess impact of topic familiarity
+- Evaluate effect of linguistic complexity on retrieval
+
+5. CLINICAL IMPLICATIONS:
+- Recommend word-finding intervention strategies
+- Identify cueing hierarchies to trial
+- Suggest compensatory strategy training
+"""
+    }
+
+    if analysis_area not in analysis_prompts:
+        return f"Error: Analysis area '{analysis_area}' not recognized. Available areas: {', '.join(analysis_prompts.keys())}"
+
+    prompt = f"""
+You are a speech-language pathologist conducting a targeted analysis of a specific area.
+
+Patient: {age}-year-old {gender}
+
+TRANSCRIPT:
+{transcript_content}{notes_section}
+
+ANALYSIS FOCUS: {analysis_area}
+
+{analysis_prompts[analysis_area]}
+
+INSTRUCTIONS:
+- Provide specific examples with exact quotes from the transcript
+- Include quantitative data (counts, percentages, rates per 100 words)
+- Give detailed clinical interpretation
+- Provide specific, measurable recommendations
+- Be thorough but focused on the specified area
+
+Conduct the targeted analysis:
+"""
+
+    return call_claude_api_quick_analysis(prompt)
+
 def check_annotation_completeness(original_transcript, annotated_transcript):
     """Check if annotation is complete by verifying last 3 words are present"""
     import re
 
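`analyze_targeted_area` above uses dictionary dispatch: the focus area selects its prompt template, and unknown areas fail fast with the list of valid choices. A minimal stand-alone sketch of that pattern, with shortened placeholder templates (`select_prompt` is a hypothetical name, not the app's function):

```python
# Two abbreviated stand-ins for the app's six full prompt templates.
analysis_prompts = {
    "Fluency and Disfluencies": "Conduct a comprehensive FLUENCY ANALYSIS focusing on disfluency types and counts.",
    "Grammar and Syntax": "Conduct a comprehensive GRAMMATICAL ANALYSIS focusing on morphology and syntax.",
}

def select_prompt(area):
    """Return the template for a known area, or an error listing valid areas."""
    if area not in analysis_prompts:
        return f"Error: Analysis area '{area}' not recognized. Available areas: {', '.join(analysis_prompts.keys())}"
    return analysis_prompts[area]

print(select_prompt("Grammar and Syntax"))
```

Keeping the templates in a dict means the Gradio dropdown choices and the dispatch table can be derived from the same keys, so the UI and the handler cannot drift apart.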
1941
  show_copy_button=True
1942
  )
1943
 
1944
+
1945
+ with gr.Tab(" Quick Questions"):
1946
+ gr.Markdown("### Ask specific questions about the transcript")
1947
 
1948
  with gr.Row():
1949
  with gr.Column():
1950
+ transcript_input_4 = gr.Textbox(
1951
+ label="Speech Transcript",
1952
+ placeholder="Paste the speech transcript here...",
1953
+ lines=8
1954
+ )
1955
+
1956
+ question_input = gr.Textbox(
1957
+ label="Your Question",
1958
+ placeholder="e.g., How many filler words are used? What grammatical errors are present?",
1959
+ lines=2
1960
  )
1961
 
1962
  with gr.Row():
1963
+ age_input_4 = gr.Textbox(label="Age", value="45")
1964
+ gender_input_4 = gr.Dropdown(
1965
  label="Gender",
1966
  choices=["Male", "Female", "Other"],
1967
  value="Male"
1968
  )
1969
 
1970
+ slp_notes_input_4 = gr.Textbox(
1971
  label="SLP Clinical Notes (Optional)",
1972
+ lines=2
1973
  )
1974
 
1975
+ # Quick question examples
1976
+ gr.Markdown("**Example Questions:**")
1977
+ with gr.Row():
1978
+ q1_btn = gr.Button("Count filler words", size="sm", variant="secondary")
1979
+ q2_btn = gr.Button("Grammar errors?", size="sm", variant="secondary")
1980
+ q3_btn = gr.Button("Vocabulary level?", size="sm", variant="secondary")
1981
+
1982
+ with gr.Row():
1983
+ q4_btn = gr.Button("Sentence complexity?", size="sm", variant="secondary")
1984
+ q5_btn = gr.Button("Word finding issues?", size="sm", variant="secondary")
1985
+ q6_btn = gr.Button("Fluency problems?", size="sm", variant="secondary")
1986
+
1987
+ example_btn_4 = gr.Button("📄 Load Example Transcript", variant="secondary", size="sm")
1988
+ ask_question_btn = gr.Button("❓ Ask Question", variant="primary")
1989
 
1990
  with gr.Column():
1991
+ question_output = gr.Textbox(
1992
+ label="Answer",
1993
+ lines=15,
1994
+ show_copy_button=True
1995
+ )
+
+    with gr.Tab("🎯 Targeted Analysis"):
+        gr.Markdown("### Focus on specific areas of speech and language")
+
+        with gr.Row():
+            with gr.Column():
+                transcript_input_5 = gr.Textbox(
+                    label="Speech Transcript",
+                    placeholder="Paste the speech transcript here...",
+                    lines=8
+                )
+
+                analysis_area = gr.Dropdown(
+                    label="Analysis Focus",
+                    choices=[
+                        "Fluency and Disfluencies",
+                        "Grammar and Syntax",
+                        "Vocabulary and Semantics",
+                        "Pragmatics and Discourse",
+                        "Sentence Complexity",
+                        "Word Finding and Retrieval"
+                    ],
+                    value="Fluency and Disfluencies"
+                )
+
+                with gr.Row():
+                    age_input_5 = gr.Textbox(label="Age", value="45")
+                    gender_input_5 = gr.Dropdown(
+                        label="Gender",
+                        choices=["Male", "Female", "Other"],
+                        value="Male"
+                    )
+
+                slp_notes_input_5 = gr.Textbox(
+                    label="SLP Clinical Notes (Optional)",
+                    lines=2
+                )
+
+                example_btn_5 = gr.Button("📄 Load Example Transcript", variant="secondary", size="sm")
+                targeted_analysis_btn = gr.Button("🎯 Run Targeted Analysis", variant="primary")
+
+            with gr.Column():
+                targeted_output = gr.Textbox(
+                    label="Targeted Analysis Results",
+                    lines=15,
                     show_copy_button=True
                 )
 
     # Event handlers - now all components are defined
     example_btn.click(fn=lambda: example_transcript, outputs=[transcript_input])
     example_btn_2.click(fn=lambda: example_transcript, outputs=[transcript_input_2])
+    example_btn_4.click(fn=lambda: example_transcript, outputs=[transcript_input_4])
+    example_btn_5.click(fn=lambda: example_transcript, outputs=[transcript_input_5])
+
+    # Quick question button handlers
+    q1_btn.click(fn=lambda: "How many filler words (um, uh, like, you know) are used in this transcript? Provide exact counts and examples.", outputs=[question_input])
+    q2_btn.click(fn=lambda: "What grammatical errors are present in this transcript? List all errors with specific examples and corrections.", outputs=[question_input])
+    q3_btn.click(fn=lambda: "What is the vocabulary level and sophistication in this transcript? Analyze word choice and complexity.", outputs=[question_input])
+    q4_btn.click(fn=lambda: "How complex are the sentences in this transcript? Analyze sentence types and structures used.", outputs=[question_input])
+    q5_btn.click(fn=lambda: "Are there any word-finding difficulties or retrieval issues? Identify specific examples and patterns.", outputs=[question_input])
+    q6_btn.click(fn=lambda: "What fluency problems or disfluencies are present? Count and categorize all instances.", outputs=[question_input])
 
     file_input.change(
         fn=process_file,
 
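The `q1_btn` handler above asks the model for exact filler-word counts; those counts can also be cross-checked deterministically before trusting the model's tally. A minimal sketch, assuming whole-word matching on lowercased text (note the analysis prompt itself flags that "like" can be a comparison rather than a filler, so this count is an upper bound):

```python
import re
from collections import Counter

def count_fillers(transcript: str) -> Counter:
    """Rough filler tally; 'like' counts include comparative uses."""
    text = transcript.lower()
    counts = Counter()
    # Single-word fillers matched as whole words
    for filler in ("um", "uh", "like"):
        counts[filler] = len(re.findall(rf"\b{filler}\b", text))
    # Multi-word filler matched on the raw lowercased text
    counts["you know"] = len(re.findall(r"\byou know\b", text))
    return counts

c = count_fillers("Um, it's like, you know, um... silver.")
# c["um"] == 2, c["like"] == 1, c["you know"] == 1
```

Such a deterministic count could be appended to the prompt so the model only has to interpret, not re-count.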
  4. Complete ALL 12 sections of the comprehensive analysis
 
  COMPREHENSIVE SPEECH SAMPLE ANALYSIS:
+
+ CRITICAL: Provide EXTENSIVE detail with ALL possible examples for each category. Quote liberally from the transcript and provide comprehensive breakdowns.
+
+ 1. SPEECH FACTORS (with EXHAUSTIVE detail and ALL examples):
+
+ A. Fluency Issues:
+ - Filler words (total: [count]):
+   * "um" ([count]): "Um, it has two..."
+   * "like" ([count]): "is like looks silver", "like fits people"
+   * "I don't know" ([count]): "I don't know how to say..."
+   * Other fillers: [list with counts]
+ - False starts/self-corrections ([count]):
+   * "My bike is like looks silver"
+   * [List other examples]
+ - Repetitions ([count]):
+   * Word repetitions: "golf cart(s)" (Xx), "back" (Xx)
+   * [List other patterns]
+
+ B. Word Retrieval Issues:
+ - Circumlocution ([count]):
+   * "this type of fish in it" (for "anchovies")
+   * "where you go golfing and the golf clubs are in back"
+ - Incomplete thoughts ([count]):
+   * "Like I've seen like a I don't know..."
+ - Word-finding pauses ([count]): [brief description]
+ - Generic language ([count]): "thing," "stuff," "something"
+
+ C. Grammatical Errors:
+ - Subject-verb agreement ([count]): "there is another type"
+ - Verb tense errors ([count]): [examples]
+ - Pronoun errors ([count]): [examples]
+ - Run-on sentences ([count]): [examples]
+
+ 2. LANGUAGE SKILLS ASSESSMENT (with comprehensive evidence):
+
+ A. Lexical/Semantic Skills:
+ - Type-Token Ratio: [number] unique words/[number] total words
+ - Vocabulary examples:
+   * Advanced vocabulary: "churrasco," "lo mein," "anchovies"
+   * Transportation terms: "bike," "golf cart," "wheels"
+   * Food terms: "caesar salad," "hummus," "pita bread"
+ - Semantic relationships:
+   * Categories: [examples of categorization]
+   * Part-whole: bike parts, food components
+   * Cause-effect: "pumped up → feels better"
+
+ B. Syntactic Skills:
+ - Sentence types:
+   * Simple sentences: [count]
+   * Compound sentences: [count]
+   * Complex sentences: [count]
+ - MLU: [number] words, [number] morphemes
+ - Average sentence length: [number] words
+
+ C. Supralinguistic Skills:
+ - Cause-effect relationships ([count]):
+   * "If you do not have a golf cart driving license you can get busted"
+   * "It feels much better now that it's pumped"
+ - Inferences: [count with examples]
+ - Problem-solving language: [count with examples]
+
+ 3. COMPLEX SENTENCE ANALYSIS (with ALL examples and counts):
+
+ A. Coordinating Conjunctions:
+ - "and": [count]
+ - "but": [count]
+ - "or": [count]
+ - "so": [count]
+
+ B. Subordinating Conjunctions:
+ - "because": [count]
+ - "when": [count]
+ - "if": [count]
+ - "that": [count]
+ - "where": [count]
+
+ C. Sentence Structure Analysis:
+ - Average sentence length: [number] words
+ - Sentence complexity: [brief description of patterns]
+
+ 4. FIGURATIVE LANGUAGE ANALYSIS (with ALL examples):
+
+ A. Similes and Metaphors:
+ - Similes ([count]): [examples]
+ - Metaphors ([count]): [examples]
+ - "Like" as filler vs. comparison: [brief analysis]
+
+ B. Idioms and Non-literal Language:
+ - Idioms ([count]): "get busted"
+ - Colloquialisms ([count]): [examples]
+
+ 5. PRAGMATIC LANGUAGE ASSESSMENT (with detailed examples):
+
+ A. Discourse Management:
+ - Topic shifts: [count] - quote ALL transitions:
+   * Bike → Golf carts: Quote exact transition
+   * Golf carts → Food: Quote exact transition
+   * Food → Cookies: Quote exact transition
+ - Topic maintenance analysis:
+   * Golf cart topic: [X utterances] - quote entire sequence
+   * Food topic: [X utterances] - quote entire sequence
+ - Topic elaboration: Count details provided per topic
+
+ B. Referential Communication:
+ - Pronoun reference errors: [count] - quote ALL unclear references
+ - Demonstrative use: [count] - quote ALL "this," "that" uses
+ - Referential clarity: Analyze with specific examples
+
+ 6. VOCABULARY AND SEMANTIC ANALYSIS (comprehensive breakdown):
+
+ A. Vocabulary Diversity:
+ - ALL lexical diversity measures with interpretations:
+   * Simple TTR: [number] - age comparison
+   * MTLD: [number] - clinical interpretation
+   * HDD: [number] - vocabulary range assessment
+   * MATTR: [number] - moving average interpretation
+ - Most frequent words: List top 20 with frequencies
+ - Vocabulary sophistication by domain with examples
+
+ B. Semantic Relationships:
+ - Word associations: Analyze patterns with examples
+ - Semantic categories: List ALL categories used
+ - Synonym/antonym use: Quote ALL instances
+ - Semantic precision: Analyze accuracy with examples
+
+ 7. MORPHOLOGICAL AND PHONOLOGICAL ANALYSIS (detailed breakdown):
+
+ A. Morphological Markers:
+ - Plurals: [count] - list ALL regular and irregular examples
+ - Verb tenses: Break down by type with ALL examples:
+   * Present tense: List 20+ examples
+   * Past tense regular: List ALL examples
+   * Past tense irregular: List ALL examples
+ - Progressive forms: [count] - list ALL "-ing" examples
+ - Possessives: [count] - list ALL examples
+ - Compound words: [count] - list ALL examples
+ - Derivational morphemes: [count] - list ALL prefixes/suffixes
+
+ B. Phonological Patterns:
+ - Articulation accuracy: Note any sound errors
+ - Syllable structure: Analyze complexity with examples
+ - Prosodic patterns: Describe rhythm and stress
+
+ 8. COGNITIVE-LINGUISTIC FACTORS (with specific evidence):
+
+ A. Working Memory:
+ - Longest successful utterance: [word count] - quote entire utterance
+ - Complex information management: Quote examples of multi-part descriptions
+ - Information retention across narrative: Analyze with examples
+
+ B. Processing Speed:
+ - Word-finding efficiency: [count of delays] - quote ALL instances
+ - Response fluency: Analyze patterns with examples
+ - Processing load indicators: Identify with examples
+
+ C. Executive Function:
+ - Self-monitoring: [count] - quote ALL self-correction instances
+ - Planning evidence: Analyze organization with examples
+ - Cognitive flexibility: Analyze topic management with examples
+
+ 9. FLUENCY AND RHYTHM ANALYSIS (comprehensive measurement):
+
+ A. Disfluency Patterns:
+ - Total disfluency count: [number] with rate per 100 words
+ - Disfluency types breakdown with ALL examples
+ - Severity assessment compared to age norms
+ - Impact on communication effectiveness
 
+ B. Language Flow:
+ - Natural pause patterns: [count] - identify ALL appropriate pauses
+ - Disrupted flow instances: [count] - quote ALL with analysis
+ - Rhythm variation: Describe patterns with examples
+
+ 10. QUANTITATIVE METRICS (report ALL data with calculations shown):
+ - Total words: [count] (method of counting explained)
+ - Total sentences: [count] (criteria for sentence boundaries)
+ - Unique words: [count] (how uniqueness was determined)
+ - MLU words: [calculation] ([total words]/[utterances])
+ - MLU morphemes: [calculation] ([total morphemes]/[utterances])
+ - ALL lexical diversity measures: [list with values and interpretations]
+ - Error rates: [calculations] for each error type
+ - Age-appropriate comparisons for ALL measures
+
+ 11. CLINICAL IMPLICATIONS (evidence-based with priorities):
+
+ A. Strengths (ranked with evidence):
+ 1. [Primary strength] - specific evidence with quotes and data
+ 2. [Secondary strength] - specific evidence with quotes and data
+ 3. [Continue ranking ALL strengths with supporting evidence]
+
+ B. Areas of Need (prioritized by severity):
+ 1. [Highest priority] - severity data, impact analysis, examples
+ 2. [Second priority] - severity data, impact analysis, examples
+ 3. [Continue with ALL areas needing intervention]
+
+ C. Treatment Recommendations (specific and measurable):
+ 1. [Primary goal] - specific techniques, frequency, duration, success criteria
+ 2. [Secondary goal] - specific techniques, frequency, duration, success criteria
+ 3. [Continue with ALL treatment recommendations]
+
+ 12. PROGNOSIS AND SUMMARY (comprehensive profile):
+ - Overall severity rating: [level] with detailed justification
+ - Developmental appropriateness: Compare ALL skills to age expectations
+ - Functional communication impact: Real-world implications
+ - Prognosis: Specific predictions with timelines and success indicators
+ - Monitoring plan: Specific measures and reassessment schedule
 
  12. PROGNOSIS AND SUMMARY:
  Overall profile based on comprehensive verified data
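Section 10 of the prompt asks the model to show its word-count, TTR, and MLU calculations; these are deterministic and cheap to verify in code rather than trusting the model's arithmetic. A minimal sketch, assuming sentence-final punctuation marks utterance boundaries and a simple word-token regex (both rough heuristics, not CLAN-style morpheme counting):

```python
import re

def quantitative_metrics(transcript: str) -> dict:
    # Utterances: split at sentence-final punctuation (rough heuristic)
    utterances = [u.strip() for u in re.split(r"[.!?]+", transcript) if u.strip()]
    # Words: lowercase alphabetic tokens; apostrophes kept so "it's" is one token
    words = re.findall(r"[a-z']+", transcript.lower())
    n, unique = len(words), len(set(words))
    return {
        "total_words": n,
        "total_utterances": len(utterances),
        "unique_words": unique,
        "ttr": round(unique / n, 3) if n else 0.0,  # type-token ratio
        "mlu_words": round(n / len(utterances), 2) if utterances else 0.0,
    }

m = quantitative_metrics("Um, it has two wheels. It feels much better now!")
# m["total_words"] == 10, m["ttr"] == 0.9, m["mlu_words"] == 5.0
```

MLU in morphemes would need morphological segmentation (splitting plural -s, progressive -ing, and so on), which this sketch deliberately leaves to the clinician or the model.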
 
         outputs=[annotation_output]
     )
 
+
+    # Quick Questions event handler
+    ask_question_btn.click(
+        fn=answer_quick_question,
+        inputs=[transcript_input_4, question_input, age_input_4, gender_input_4, slp_notes_input_4],
+        outputs=[question_output]
+    )
+
+    # Targeted Analysis event handler
+    targeted_analysis_btn.click(
+        fn=analyze_targeted_area,
+        inputs=[transcript_input_5, analysis_area, age_input_5, gender_input_5, slp_notes_input_5],
+        outputs=[targeted_output]
     )
 
 if __name__ == "__main__":