Here is an example of a Gradio-based code generation builder that meets your requirements:
import gradio as gr
from transformers import RobertaTokenizer, T5ForConditionalGeneration

# Initialize the CodeT5 model and tokenizer
# (CodeT5 pairs a RoBERTa-style tokenizer with a T5 architecture)
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")
tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")

def generate_code(input_code, upload_file, temperature, max_length):
    # Prefer the uploaded file's contents over the text box, if a file was provided
    if upload_file is not None:
        with open(upload_file.name, "r") as file:
            input_code = file.read()

    # Tokenize the input code
    input_ids = tokenizer.encode(input_code, return_tensors="pt")

    # Generate code with CodeT5 (sampling must be enabled for temperature to take effect)
    output = model.generate(
        input_ids,
        do_sample=True,
        temperature=temperature,
        max_length=max_length,
    )

    # Decode the generated tokens back into a string
    generated_code = tokenizer.decode(output[0], skip_special_tokens=True)

    # Build the conversation history
    conversation_history = f"Input Code: {input_code}\nGenerated Code: {generated_code}"
    return generated_code, conversation_history

# Define the Gradio interface
demo = gr.Interface(
    fn=generate_code,
    inputs=[
        gr.Textbox(label="Input Code/Prompt"),
        gr.File(label="Upload Code File"),
        gr.Slider(label="Temperature", minimum=0, maximum=1, value=0.5),
        gr.Slider(label="Max Length", minimum=10, maximum=512, value=256),
    ],
    outputs=[
        gr.Code(label="Generated Code"),
        gr.Textbox(label="Conversation History"),
    ],
    title="CodeT5 Code Generation Builder",
    description="Generate code snippets using CodeT5 and interact with the AI model through a simple web interface.",
)

# Launch the Gradio interface
demo.launch()
This code defines a Gradio interface that takes four inputs:
A text box for inputting code or prompts
A file uploader for uploading code files
A temperature slider to adjust the generation temperature
A max length slider to adjust the maximum generated code length
The interface returns two outputs:
A code box displaying the generated code
A text box displaying the conversation history (including the input code and generated code)
When the user submits the form, the generate_code function is called. It reads the uploaded file (if one was provided) in place of the text-box input, tokenizes the code, generates output with CodeT5, and decodes the result to a string. The conversation history is updated accordingly.
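The "uploaded file takes precedence over the text box" behavior can be isolated into a small helper and checked on its own. This is a sketch, not part of the original code; the resolve_input name is hypothetical, and it takes a plain file path rather than a Gradio file object:

import tempfile

def resolve_input(input_code, upload_path):
    """Return the uploaded file's contents if a path was provided,
    otherwise fall back to the text-box input (hypothetical helper)."""
    if upload_path is not None:
        with open(upload_path, "r") as f:
            return f.read()
    return input_code

# No file uploaded: the text-box input is used as-is
print(resolve_input("def add(a, b):", None))

# File uploaded: its contents take precedence over the text box
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("print('hello')")
    path = f.name
print(resolve_input("ignored", path))

The same precedence rule is what the if upload_file is not None branch implements inside generate_code.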
Note that you need to install the transformers library (along with gradio and torch); the CodeT5 model and tokenizer are downloaded from the Hugging Face Hub the first time the script runs.
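If the dependencies are missing, they can be installed with pip (assuming the standard PyPI package names):

pip install gradio transformers torch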