🔍 PriVAL: Prompt Input VALidation Toolkit
PriVAL is a lightweight and extensible toolkit for evaluating the quality of prompts for LLMs.
It provides multi-dimensional scoring and improvement suggestions, helping you write better prompts that deliver more reliable model outputs.
✨ Features
- Multi-dimensional scoring: Covers clarity, ambiguity, injection risk, relevance, and more.
- Pluggable detectors: Each dimension is modular and easy to extend or customize (see the sketch after this list).
- One-line evaluation: Just call evaluate_prompt(prompt) to get structured scores and suggestions.
- Flexible config: Easily enable/disable dimensions and adjust weights or thresholds.
- Report generation: Output in JSON / Markdown / HTML formats.
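As a sketch of the pluggable-detector idea, here is a hypothetical custom detector. The function name detect and its return shape are assumptions for illustration only; the real interface in detectors/ may differ.

# Hypothetical custom detector (the actual interface in prival/detectors/ may differ).
# Assumption: a detector exposes detect(prompt) returning a score and suggestions.
def detect(prompt: str) -> dict:
    """Score whether the prompt states a desired output format."""
    format_hints = ("format", "json", "markdown", "bullet", "table")
    has_format = any(hint in prompt.lower() for hint in format_hints)
    return {
        "score": 1.0 if has_format else 0.4,
        "suggestions": [] if has_format else ["Specify the expected output format."],
    }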
📦 Installation
# Basic (recommended)
pip install prival
# Install a specific version
pip install prival==0.1.9
# Full version (includes spaCy-based analysis)
pip install prival[full]
⚠️ macOS or lightweight environments may encounter issues with spaCy or language-tool-python.
If you're not using syntax/structure-related dimensions, install the base version only.
🧪 Quick Example
from prival import evaluate_prompt
prompt = "Please write a gentle yet firm resignation letter."
result = evaluate_prompt(prompt)
print(result["total_score"])
print(result["clarity"])
print(result["suggestions"])
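Because evaluate_prompt returns a plain dict with a total_score key, it is easy to compare several drafts in a loop. The snippet below is a usage sketch built only on the API shown above.

from prival import evaluate_prompt

# Compare a few drafts and keep the highest-scoring one.
drafts = [
    "Write a resignation letter.",
    "Please write a gentle yet firm resignation letter.",
    "Please write a gentle yet firm resignation letter, under 200 words.",
]
best_score, best_prompt = max((evaluate_prompt(p)["total_score"], p) for p in drafts)
print(f"Best draft ({best_score:.2f}): {best_prompt}")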
🛠️ Project Structure
prival/
├── config.yaml    # Global config: dimensions, weights, thresholds
├── core.py        # Main logic: detector routing + aggregation
├── detectors/     # Each validation dimension as standalone module
├── scoring.py     # Weighted score logic
├── report.py      # Output as Markdown / HTML
├── utils/         # NLP helpers (syntax, keywords, embeddings)
└── tests/         # Unit tests + example prompts
🧩 Config Example
enabled_dimensions:
  - clarity
  - ambiguity
  - step_guidance
  - injection_risk
  # ...

weights:
  clarity: 0.15
  ambiguity: 0.10
  step_guidance: 0.10
  injection_risk: 0.15
  # ...

thresholds:
  clarity: 0.6
  injection_risk: 0.5
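The total score is derived from the per-dimension scores and these weights; the exact logic lives in scoring.py. The snippet below is only a sketch of how such an aggregation could work, assuming a normalized weighted sum.

# Sketch of weighted aggregation; assumes a normalized weighted sum.
# The actual implementation lives in prival/scoring.py and may differ.
def aggregate(scores: dict, weights: dict) -> float:
    """Combine per-dimension scores (0.0 to 1.0) into a single total score."""
    total_weight = sum(weights.get(dim, 0.0) for dim in scores)
    if total_weight == 0:
        return 0.0
    return sum(s * weights.get(dim, 0.0) for dim, s in scores.items()) / total_weight

# Example: aggregate({"clarity": 0.9, "injection_risk": 0.8},
#                    {"clarity": 0.15, "injection_risk": 0.15})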
📊 Output + Reporting
Each result contains:
- score: a float value (0.0 to 1.0)
- suggestions: concrete suggestions for improvement

Example output:
{
"clarity": { "score": 0.9, "suggestions": [] },
"step_guidance": { "score": 0.3, "suggestions": ["Add step-by-step hints."] },
"total_score": 0.72
}
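To act on a result programmatically, you can walk the per-dimension entries and compare them against your thresholds. The snippet below follows the output structure shown above; the thresholds are hard-coded for illustration, whereas in practice they come from config.yaml.

# Flag dimensions that fall below their threshold and print their suggestions.
result = {
    "clarity": {"score": 0.9, "suggestions": []},
    "step_guidance": {"score": 0.3, "suggestions": ["Add step-by-step hints."]},
    "total_score": 0.72,
}
thresholds = {"clarity": 0.6, "injection_risk": 0.5}  # illustrative values

for dim, entry in result.items():
    if not isinstance(entry, dict):
        continue  # skip the aggregate total_score
    if entry["score"] < thresholds.get(dim, 0.5):
        print(f"{dim}: {entry['score']:.2f} -- {'; '.join(entry['suggestions'])}")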
To export the result as a visual HTML report:
from prival.report import generate_html
generate_html(result, "report.html")
🤖 CLI (coming soon)
prival-cli evaluate "Your prompt text here"
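Until the CLI ships, a minimal stand-in script can wrap evaluate_prompt. The file name and JSON output below are just a sketch, not the planned prival-cli format.

# prompt_check.py -- interim stand-in until prival-cli is released.
# Usage: python prompt_check.py "Your prompt text here"
import json
import sys

from prival import evaluate_prompt

if __name__ == "__main__":
    print(json.dumps(evaluate_prompt(sys.argv[1]), indent=2))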
📜 License
MIT License. Feel free to fork, extend, or integrate into your own LLM projects.
📝 Feedback
Issues, suggestions, or PRs are warmly welcomed: https://github.com/EugeneXiang or ping me on Hugging Face
Happy prompting! 🎉 Let your prompts shine ✨