---
title: Grammar Correction App
emoji: 📝
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 4.14.0
app_file: app.py
pinned: false
---

πŸ“ Grammar Correction App

A web-based grammar correction application built with Gradio and powered by a fine-tuned T5 transformer model. This application provides an intuitive interface for correcting grammatical errors in English text.

## ✨ Features

- **Real-time Grammar Correction**: Instantly fix grammatical errors in your text
- **User-friendly Interface**: Clean and intuitive web interface built with Gradio
- **AI-Powered**: Uses the `vennify/t5-base-grammar-correction` model from Hugging Face
- **Example Sentences**: Pre-loaded examples to demonstrate functionality
- **Error Handling**: Robust error handling for edge cases

πŸš€ Quick Start

Prerequisites

  • Python 3.7 or higher
  • pip package manager

### Installation

1. Clone or download the repository

   ```bash
   git clone <repository-url>
   cd updated-grammar
   ```

2. Install dependencies

   ```bash
   pip install -r requirements.txt
   ```

3. Run the application

   ```bash
   python app.py
   ```

4. Access the web interface

   Open your browser and navigate to the URL displayed in the terminal (typically http://127.0.0.1:7860).

πŸ“‹ Dependencies

  • transformers (4.36.2): Hugging Face library for transformer models
  • torch (2.1.2): PyTorch deep learning framework
  • gradio (4.8.0): Web interface framework for machine learning models
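
Pinned to the versions listed above, the `requirements.txt` referenced in the installation step would read:

```text
transformers==4.36.2
torch==2.1.2
gradio==4.8.0
```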

πŸ› οΈ How It Works

  1. Model Loading: The app loads the pre-trained T5-base grammar correction model from Hugging Face
  2. Text Processing: Input text is tokenized and formatted with a "grammar:" prompt
  3. Inference: The model generates corrected text using beam search decoding
  4. Output: The corrected text is displayed in the web interface
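
The four steps above can be sketched roughly as follows. This is a minimal sketch, not the exact code in `app.py`; `build_prompt` is an illustrative helper, and only the model name and parameters come from this README:

```python
MODEL_PATH = "vennify/t5-base-grammar-correction"

def build_prompt(text: str) -> str:
    # Step 2: the model expects its input behind a "grammar:" task prefix
    return "grammar: " + text.strip()

def correct_grammar(text: str) -> str:
    # Heavy imports kept local so build_prompt is usable without the ML stack
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    # Step 1: load tokenizer and model (downloaded and cached on first run)
    tokenizer = T5Tokenizer.from_pretrained(MODEL_PATH)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_PATH)

    inputs = tokenizer(build_prompt(text), return_tensors="pt",
                       max_length=512, truncation=True)
    # Step 3: beam-search decoding
    output_ids = model.generate(**inputs, max_length=128,
                                num_beams=5, early_stopping=True)
    # Step 4: turn token IDs back into a display string
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```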

πŸ“ Usage Examples

Try these examples in the application:

  • Input: "She go to school every day." Output: "She goes to school every day."

  • Input: "I is a boy." Output: "I am a boy."

  • Input: "He don't like apples." Output: "He doesn't like apples."

  • Input: "We was playing outside." Output: "We were playing outside."

βš™οΈ Model Configuration

The application uses the following model parameters:

  • Model: vennify/t5-base-grammar-correction
  • Max Input Length: 512 tokens
  • Max Output Length: 128 tokens
  • Beam Search: 5 beams
  • Temperature: 0.7
  • Early Stopping: Enabled

πŸ”§ Customization

Modifying Model Parameters

You can adjust the generation parameters in the correct_grammar() function:

outputs = model.generate(
    **inputs,
    max_length=128,        # Maximum output length
    num_beams=5,          # Number of beams for beam search
    early_stopping=True,   # Stop when EOS token is generated
    temperature=0.7,       # Sampling temperature
    do_sample=False       # Use deterministic generation
)

### Changing the Model

To use a different grammar correction model, update the `model_path` variable:

```python
model_path = "your-preferred-model-name"
```

## 🚨 Error Handling

The application includes error handling for:

- Empty input text
- Model inference errors
- Tokenization issues
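
A guard covering these cases can be sketched as a small wrapper. This is an illustration, not the app's actual code: `correct_grammar_safe` and the `correct_fn` parameter are hypothetical names, with the correction function passed in so the wrapper stays self-contained:

```python
from typing import Callable

def correct_grammar_safe(text: str, correct_fn: Callable[[str], str]) -> str:
    """Wrap a correction function with the guards this README lists."""
    # Guard 1: empty or whitespace-only input
    if not text or not text.strip():
        return "Please enter some text to correct."
    try:
        # correct_fn performs tokenization + inference; any failure there
        # is caught and reported instead of crashing the interface
        return correct_fn(text)
    except Exception as exc:
        return f"Error during correction: {exc}"
```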

πŸ“± Interface Features

  • Input Box: Multi-line text area for entering text to correct
  • Output Box: Displays the corrected text
  • Examples: Click on example sentences to test the application
  • Real-time Processing: Instant correction when you submit text

## 🤝 Contributing

Feel free to contribute to this project by:

  1. Reporting bugs
  2. Suggesting new features
  3. Submitting pull requests
  4. Improving documentation

πŸ“„ License

This project is open source. Please check the license file for more details.

πŸ™ Acknowledgments

  • Hugging Face for providing the transformer models and libraries
  • Gradio for the excellent web interface framework
  • vennify for the pre-trained grammar correction model

πŸ”— Links


**Note**: The first run may take some time while the model is downloaded from Hugging Face. Subsequent runs are faster because the model is cached locally.