import os

import gradio as gr
from groq import Groq

# API key and the formatting prompt are both read from environment variables.
GROQ_API_KEY = os.environ.get("not_your_avg_key_mate")
client = Groq(api_key=GROQ_API_KEY)

PROMPT_ADVANCE = os.environ.get("prompt_code")

FOOTER_TEXT = """ """

MAIN_PAGE_DISCLAIMER = """

Disclaimer: This app is designed to generate human-readable explanations for Leetcode solutions. Its primary purpose is to help you revisit your own logic later or understand unfamiliar (a.k.a. alien 👽) code in a clean, structured, and fun way. Please review all outputs before sharing publicly, and treat the explanations as supportive drafts — not final answers.

""" RAW_MARKDOWN_DISCLAIMER = """

Heads-up: This is the raw markdown output generated for your Leetcode solution. It's copy-paste ready for platforms like Leetcode Discuss, GitHub, or personal blogs. Feel free to tweak the tone, structure, or formatting to better match your voice or audience.

""" INSTRUCTIONS = """

Instructions & a Little Backstory 📖:

So here’s the deal — I built this tool just for fun (and for my own sanity). 😅 After solving a DSA problem, the last thing I wanted was to spend 15 more minutes formatting it for Leetcode Discuss. Clean markdown, proper explanation, structure — it adds up.

That’s where this app comes in clutch. I just drop my magic spell — a.k.a. my code — into the input box, click a button, and *BOOM*, it generates a beautiful, structured explanation with markdown. Upload-ready, dev-friendly, and cheeky as hell. Isn’t that fun?

By default, it follows a format I personally prompt-engineered 🧠 — one that includes sections like Intuition 💡, Approach 🪜, Code 👨🏽‍💻, and more — designed to make your solution stand out and feel human. But hey, you’re free to tweak it however you want! The output is pure markdown, so go wild if needed.

How to use:

1. Paste your solution (or the problem plus your code) into the input box on the "Explain My Solution 🚀" tab.
2. Hit the button and let it cook for a few seconds.
3. Read the rendered explanation right there, or hop over to the "Raw Markdown Output 📝" tab to copy the markdown as-is.
4. Want it more creative or more concise? Play with the Temperature and Tokens sliders in the Tweak Zone 🔧.

This app's just my little productivity sidekick — hope it becomes yours too. If you vibe with it, feel free to drop a like, or connect on LinkedIn. Let’s grow together.

""" TITLE = "

WhatTheCode() 💻🤯

" def generate_response(message: str, system_prompt: str, temperature: float, max_tokens: int): conversation = [ {"role": "system", "content": system_prompt}, {"role": "user", "content": message} ] response = client.chat.completions.create( model="meta-llama/llama-4-scout-17b-16e-instruct", messages=conversation, temperature=temperature, max_tokens=max_tokens, stream=False ) return response.choices[0].message.content def analyze_solution(code, temperature, max_tokens): prompt = f""" {PROMPT_ADVANCE} {code} """ markdown_output = generate_response(prompt, "You are an expert Leetcode Solution Writer.", temperature, max_tokens) return markdown_output, markdown_output # Both rendered and raw # UI Block with gr.Blocks(theme="gradio/gsoft") as demo: gr.HTML(TITLE) with gr.Tab("Explain My Solution 🚀"): gr.HTML(INSTRUCTIONS) gr.HTML(MAIN_PAGE_DISCLAIMER) code = gr.Textbox(label="Description", lines=5, placeholder="Paste your code or problem here...") analyze_btn = gr.Button("Get my Solution Fasttttt") with gr.Row(): output_rendered = gr.Markdown(label="Formatted Markdown") # Rendered with gr.Accordion("Tweak Zone 🔧", open=True): temperature = gr.Slider(0, 1, 0.1, value=0.5, label="Imagination Dial 🎨 (Temperature)") max_tokens = gr.Slider(50, 1024, step=1, value=1024, label="How Much It’ll Say ✍️ (Tokens)") with gr.Tab("Raw Markdown Output 📝"): gr.HTML(RAW_MARKDOWN_DISCLAIMER) raw_markdown_box = gr.Textbox(label="Raw Markdown", lines=20, show_copy_button=True) analyze_btn.click( analyze_solution, inputs=[code, temperature, max_tokens], outputs=[output_rendered, raw_markdown_box] ) gr.HTML(FOOTER_TEXT) if __name__ == "__main__": demo.launch()