GenAIDevTOProd committed on
Commit
bccc7f1
·
verified ·
1 Parent(s): 327fc24

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,38 @@
+ # PromptGuru
+
+ **Modular prompt engineering library** for BERT, Mistral, LLaMA, and FLAN-T5 using YAML templates.
+ Includes modes like **ELI5**, **DevMode**, **Refine**, **Classification**, and **QA**.
+
+ ## Why
+ - Lightweight and framework-agnostic
+ - YAML-first: edit prompts without changing code
+ - Consistent modes across multiple model families
+
+ ## Install (Local Dev)
+ ```bash
+ pip install PyYAML
+ ```
+ > For now, clone or copy this repo. Packaging metadata for PyPI lives in `pyproject.toml`.
+
+ ## Usage
+ ```python
+ from promptguru.engine import PromptEngine
+
+ engine = PromptEngine(model_type="mistral", mode="eli5")
+ prompt = engine.generate_prompt("What is quantum entanglement?")
+ print(prompt)
+ ```
+
+ ## Templates
+ Templates live in `promptguru/templates/`:
+ - `bert.yaml` → `classification`, `fill_mask`, `qa`
+ - `mistral.yaml` → `eli5`, `devmode`, `refine`
+ - `llama.yaml` → `eli5`, `devmode`, `refine`
+ - `flan_t5.yaml` → `eli5`, `devmode`, `explain_and_tag`
+
+ ## Roadmap
+ - Add inference adapters (HF Inference API, OpenRouter) behind a common interface
+ - Add more modes (contrastive QA, chain-of-thought, safety/risk tags)
+
+ ## License
+ Apache 2.0
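The Usage snippet in the README requires the package to be installed. The same flow can be sketched self-contained: a plain dict stands in for the parsed `mistral.yaml` (the template text is hand-inlined here to mirror the shipped file, and `generate_prompt` is a local stand-in for the `PromptEngine` method, not the library itself).

```python
# Self-contained sketch of PromptEngine's render step: a dict stands in
# for the result of yaml.safe_load() on mistral.yaml.
templates = {
    "eli5": (
        "You are a helpful assistant. Explain the following like I'm five.\n"
        "Input: {{input}}\n"
    ),
}

def generate_prompt(mode: str, user_input: str) -> str:
    # Mirrors PromptEngine.generate_prompt: look up the mode, substitute {{input}}.
    if mode not in templates:
        raise ValueError(f"Mode '{mode}' not found")
    return templates[mode].replace("{{input}}", user_input)

print(generate_prompt("eli5", "What is quantum entanglement?"))
```

Because the templates are data, adding a new mode is just another key in the YAML file; no engine code changes are needed.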
promptguru.egg-info/PKG-INFO ADDED
@@ -0,0 +1,50 @@
+ Metadata-Version: 2.4
+ Name: promptguru
+ Version: 0.1.0
+ Summary: Modular prompt engineering library
+ Author-email: Naga Adithya Kaushik <[email protected]>
+ License: Apache-2.0
+ Project-URL: Homepage, https://huggingface.co/spaces/GenAIDevTOProd/PromptGuru
+ Keywords: prompts,nlp,llm,huggingface,templates
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ Requires-Dist: PyYAML>=6.0
+
+ # PromptGuru
+
+ **Modular prompt engineering library** for BERT, Mistral, LLaMA, and FLAN-T5 using YAML templates.
+ Includes modes like **ELI5**, **DevMode**, **Refine**, **Classification**, and **QA**.
+
+ ## Why
+ - Lightweight and framework-agnostic
+ - YAML-first: edit prompts without changing code
+ - Consistent modes across multiple model families
+
+ ## Install (Local Dev)
+ ```bash
+ pip install PyYAML
+ ```
+ > For now, clone or copy this repo. Packaging metadata for PyPI lives in `pyproject.toml`.
+
+ ## Usage
+ ```python
+ from promptguru.engine import PromptEngine
+
+ engine = PromptEngine(model_type="mistral", mode="eli5")
+ prompt = engine.generate_prompt("What is quantum entanglement?")
+ print(prompt)
+ ```
+
+ ## Templates
+ Templates live in `promptguru/templates/`:
+ - `bert.yaml` → `classification`, `fill_mask`, `qa`
+ - `mistral.yaml` → `eli5`, `devmode`, `refine`
+ - `llama.yaml` → `eli5`, `devmode`, `refine`
+ - `flan_t5.yaml` → `eli5`, `devmode`, `explain_and_tag`
+
+ ## Roadmap
+ - Add inference adapters (HF Inference API, OpenRouter) behind a common interface
+ - Add more modes (contrastive QA, chain-of-thought, safety/risk tags)
+
+ ## License
+ Apache 2.0
promptguru.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,13 @@
+ README.md
+ pyproject.toml
+ promptguru/__init__.py
+ promptguru/engine.py
+ promptguru.egg-info/PKG-INFO
+ promptguru.egg-info/SOURCES.txt
+ promptguru.egg-info/dependency_links.txt
+ promptguru.egg-info/requires.txt
+ promptguru.egg-info/top_level.txt
+ promptguru/templates/bert.yaml
+ promptguru/templates/flan_t5.yaml
+ promptguru/templates/llama.yaml
+ promptguru/templates/mistral.yaml
promptguru.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
promptguru.egg-info/requires.txt ADDED
@@ -0,0 +1 @@
+ PyYAML>=6.0
promptguru.egg-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ promptguru
promptguru/__init__.py ADDED
@@ -0,0 +1,3 @@
+ """PromptGuru: Modular prompt engineering with YAML templates."""
+ from .engine import PromptEngine, load_template
+ __all__ = ["PromptEngine", "load_template"]
promptguru/__pycache__/__init__.cpython-311.pyc ADDED
Binary file (353 Bytes)
promptguru/__pycache__/engine.cpython-311.pyc ADDED
Binary file (2.67 kB)
promptguru/engine.py ADDED
@@ -0,0 +1,31 @@
+ # promptguru/engine.py
+ from pathlib import Path
+ import yaml
+
+ def load_template(model_type: str) -> dict:
+     """Load a YAML template file for a given model type (e.g., 'mistral', 'bert')."""
+     model_type = model_type.lower()
+     template_path = Path(__file__).parent / "templates" / f"{model_type}.yaml"
+     if not template_path.exists():
+         raise FileNotFoundError(f"Template file not found for model: {model_type}")
+     with open(template_path, "r", encoding="utf-8") as f:
+         return yaml.safe_load(f)
+
+ class PromptEngine:
+     """Minimal prompt templating engine.
+
+     Usage:
+         engine = PromptEngine(model_type="mistral", mode="eli5")
+         prompt = engine.generate_prompt("Explain quantum entanglement")
+     """
+     def __init__(self, model_type: str, mode: str):
+         self.model_type = model_type.lower()
+         self.mode = mode.lower()
+         self._template_dict = load_template(self.model_type)
+
+     def generate_prompt(self, user_input: str) -> str:
+         """Render a template with the given input text."""
+         if self.mode not in self._template_dict:
+             raise ValueError(f"Mode '{self.mode}' not found in {self.model_type}.yaml")
+         template = self._template_dict[self.mode]
+         return template.replace("{{input}}", user_input)
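Two details of `engine.py` are worth noting: `load_template` resolves the template path relative to the module file, so it works regardless of the caller's working directory, and `str.replace` substitutes every occurrence of the placeholder, so a template may reference `{{input}}` more than once. The second point in a minimal sketch (the template string here is an illustrative example, not one of the shipped modes):

```python
# str.replace swaps every occurrence, so {{input}} may appear repeatedly
# in a template and each occurrence receives the user input.
template = "Question: {{input}}\nRepeat the question verbatim: {{input}}"
rendered = template.replace("{{input}}", "What is entropy?")
print(rendered)
```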
promptguru/templates/bert.yaml ADDED
@@ -0,0 +1,11 @@
+ classification: |
+   Classify the sentiment of the following text as positive, negative, or neutral.
+   Text: {{input}}
+
+ fill_mask: |
+   Fill in the missing word in the following sentence.
+   Sentence: {{input}}
+
+ qa: |
+   Extract the answer from the given context.
+   Context and Question: {{input}}
promptguru/templates/flan_t5.yaml ADDED
@@ -0,0 +1,9 @@
+ eli5: |
+   Explain like I'm 5: {{input}}
+
+ devmode: |
+   Summarize technically and list risks: {{input}}
+
+ explain_and_tag: |
+   Summarize this input and assign a risk level (low, medium, high).
+   Input: {{input}}
promptguru/templates/llama.yaml ADDED
@@ -0,0 +1,18 @@
+ eli5: |
+   <s>[INST] <<SYS>>
+   You are a helpful tutor. Explain the user's input simply and clearly.
+   <</SYS>>
+   {{input}} [/INST]
+
+ devmode: |
+   <s>[INST] <<SYS>>
+   You are a senior software engineer. Provide a structured technical analysis.
+   Include: context, constraints, assumptions, risks, and next steps.
+   <</SYS>>
+   {{input}} [/INST]
+
+ refine: |
+   <s>[INST] <<SYS>>
+   You are a precise editor. Rewrite the user's text for clarity and brevity while preserving meaning.
+   <</SYS>>
+   {{input}} [/INST]
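The `<s>[INST] <<SYS>> … <</SYS>> … [/INST]` wrapping in these templates follows the LLaMA-2 chat convention: the system prompt sits inside the `<<SYS>>` markers and the user turn is closed by `[/INST]`. Rendering the `eli5` mode uses the same `{{input}}` substitution as the engine; the template string is copied inline here for illustration rather than loaded from the YAML file.

```python
# LLaMA-2 chat-format template mirroring the eli5 mode above, inlined
# for illustration instead of loaded via yaml.safe_load.
eli5 = (
    "<s>[INST] <<SYS>>\n"
    "You are a helpful tutor. Explain the user's input simply and clearly.\n"
    "<</SYS>>\n"
    "{{input}} [/INST]\n"
)
prompt = eli5.replace("{{input}}", "Why is the sky blue?")
print(prompt)
```

Keeping the chat markers in the template (rather than in engine code) is what lets the same engine serve plain-instruction models like Mistral and chat-formatted models like LLaMA-2.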
promptguru/templates/mistral.yaml ADDED
@@ -0,0 +1,12 @@
+ eli5: |
+   You are a helpful assistant. Explain the following like I'm five.
+   Input: {{input}}
+
+ devmode: |
+   You are a senior engineer. Provide a concise, technical breakdown of the following content.
+   Include assumptions, potential risks, and an action-oriented summary.
+   Input: {{input}}
+
+ refine: |
+   You are an expert editor. Improve clarity, reduce jargon, and keep key details.
+   Text: {{input}}
pyproject.toml ADDED
@@ -0,0 +1,24 @@
+ [build-system]
+ requires = ["setuptools>=61.0"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "promptguru"
+ version = "0.1.0"
+ description = "Modular prompt engineering library"
+ readme = "README.md"
+ requires-python = ">=3.8"
+ license = {text = "Apache-2.0"}
+ authors = [{name = "Naga Adithya Kaushik", email = "[email protected]"}]
+ keywords = ["prompts", "nlp", "llm", "huggingface", "templates"]
+ dependencies = ["PyYAML>=6.0"]
+
+ [project.urls]
+ Homepage = "https://huggingface.co/spaces/GenAIDevTOProd/PromptGuru"
+
+ [tool.setuptools.packages.find]
+ where = ["."]
+ include = ["promptguru"]
+
+ [tool.setuptools.package-data]
+ promptguru = ["templates/*.yaml"]
requirements.txt ADDED
@@ -0,0 +1 @@
+ PyYAML>=6.0