Delete blackgat
- blackgat/README.md +0 -148
- blackgat/app/ai_modules.py +0 -52
- blackgat/app/main.py +0 -51
- blackgat/app/scanner_agent.py +0 -0
- blackgat/dashboard/dashboard.py +0 -59
- blackgat/models/load_llms.py +0 -21
- blackgat/requirements.txt +0 -8
- blackgat/spaces.yaml +0 -2
blackgat/README.md
DELETED
@@ -1,148 +0,0 @@
# 🧠 Project BlackGat AI

> _AI-powered Bug Bounty Automation Framework with Cybersecurity-Optimized LLMs_

BlackGat is a modular, LLM-augmented bug bounty system designed to streamline vulnerability discovery, chaining, and report generation. It fuses advanced cybersecurity-focused language models into a unified tool for ethical hackers, red teamers, and security researchers.

---

## 🚀 Key Features

- 🧠 **LLM-Powered Agents**:
  - **HeatSeeker** – Scores targets based on risk heuristics
  - **KillChainAI** – Correlates vulnerabilities to suggest exploit chains
  - **SCRIBE** – Autogenerates Markdown bug bounty reports
  - **ReconGPT** – Converts natural language into scanning/fuzzing tasks
  - **Hackphyr Agent** – Simulates attacker behavior, suggests exploits

- 🤖 **Integrated LLMs from Hugging Face**:

  | Model | Role |
  |-------|------|
  | `CyberBase-13B` | Report writing, CVE logic, PoC generation |
  | `Llama-Primus-Reasoning` | Vuln correlation, chaining, logic mapping |
  | `Hackphyr` | Red-team simulation, attacker tactics |
  | `GPT-NeoX` | Prompt-driven recon and fuzzing suggestions |

- 🧩 **Modular API with FastAPI**
- 🖥️ **Live Dashboard using Streamlit**
- ☁️ **Deployable on Hugging Face Spaces**
- 🐳 **Docker-compatible**

---

## 📦 Project Structure

```
blackgat/
├── app/
│   ├── main.py            # FastAPI API for AI endpoints
│   ├── ai_modules.py      # All AI agent logic
│   └── scanner_agent.py   # Reserved for future tool integrations
├── models/
│   └── load_llms.py       # LLM loader for Hugging Face models
├── dashboard/
│   └── dashboard.py       # Streamlit control panel
├── requirements.txt       # Python dependencies
├── Dockerfile             # Hugging Face & local deployment
├── spaces.yaml            # Hugging Face Space config
└── README.md              # This file
```

---

## 🧠 Agents & Capabilities

| Agent | LLM Used | Capability |
|---------------|-------------------------------------|----------------------------------------------------------------------------|
| `HeatSeeker`  | Internal scoring                    | Scores targets based on heuristic signals (e.g., admin pages, error codes) |
| `KillChainAI` | `TrendMicro/llama-primus-reasoning` | Suggests chained vulnerabilities (e.g., IDOR + XSS → Account Takeover)     |
| `SCRIBE`      | `CyberNative/CyberBase-13b`         | Writes vulnerability reports in Markdown format                            |
| `ReconGPT`    | `EleutherAI/gpt-neox-20b`           | Converts natural language to recon/fuzzing actions                         |
| `Exploit`     | `hackphyr/hackphyr`                 | Suggests exploit paths and red-team logic                                  |

---

## 🛠️ Installation (Local)

```bash
# Clone project
git clone https://github.com/YOUR_USERNAME/blackgat.git
cd blackgat

# Install dependencies
pip install -r requirements.txt

# Run FastAPI backend (optional)
uvicorn app.main:app --reload

# Launch dashboard
streamlit run dashboard/dashboard.py
```

---

## ☁️ Deploy to Hugging Face Spaces

1. Go to [https://huggingface.co/spaces](https://huggingface.co/spaces)
2. Create a new **Streamlit** space (call it `BlackGat-AI`)
3. Clone it:

```bash
git lfs install
git clone https://huggingface.co/spaces/YOUR_USERNAME/BlackGat-AI
cd BlackGat-AI
```

4. Copy all files into the folder and push:

```bash
cp -r ../blackgat/* .
git add .
git commit -m "Initial BlackGat LLM AI deploy"
git push
```

✅ Done. Your app will build and be live at:

```
https://huggingface.co/spaces/YOUR_USERNAME/BlackGat-AI
```

---

## ✅ Supported Use Cases

- 🕵️‍♂️ Recon automation with LLM-driven payload suggestions
- 🧠 Vulnerability classification and chaining
- 📝 AI-assisted report generation (Markdown/JSON-ready)
- ⚙️ Integration-ready for Burp Suite, sqlmap, ffuf, nuclei
- 💬 Prompt-to-agent tasking for fuzzing, exploitation, and validation

---

## 🧩 Roadmap (Next Phases)

- [ ] Add `ffuf`, `sqlmap`, `dalfox` to the scanner agent
- [ ] CVSS-based impact scoring
- [ ] MongoDB backend for scan/result persistence
- [ ] PDF report generation & auto-submission to HackerOne/Bugcrowd
- [ ] User auth for multi-tenant dashboards

---

## 🛡️ Ethical Usage

BlackGat is intended for use in authorized security testing, research, and learning environments. Please ensure all activities conducted using this framework comply with legal and ethical standards. **Always respect target scope and obtain proper authorization.**

---

## 🧠 Credits

- Hugging Face – for hosting the cybersecurity-tuned LLMs
- CyberNative, TrendMicro, Hackphyr, EleutherAI – for open-source LLMs
- OWASP, HackerOne, ExploitDB – for vulnerability reference inspiration

---

## 📬 License

MIT License – Free to use, fork, and expand with attribution.
blackgat/app/ai_modules.py
DELETED
@@ -1,52 +0,0 @@
# app/ai_modules.py
import requests
import os

HF_API_TOKEN = os.getenv("BlackGat_AI")
if not HF_API_TOKEN:
    raise RuntimeError("BlackGat_AI token not set. Please add it as a Hugging Face Space secret.")

HEADERS = {"Authorization": f"Bearer {HF_API_TOKEN}"}

def query_huggingface(prompt, model_id, max_tokens=150):
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_tokens}
    }

    try:
        response = requests.post(url, headers=HEADERS, json=payload, timeout=30)
        response.raise_for_status()
        result = response.json()
        # Successful generations arrive as a non-empty list of dicts
        if isinstance(result, list) and result and "generated_text" in result[0]:
            return result[0]["generated_text"]
        return "[ERROR] Unexpected response format"
    except requests.exceptions.RequestException as e:
        return f"[ERROR] API call failed: {e}"

def scribe_generate(data):
    prompt = f"""Write a professional bug bounty report.
Vulnerability: {data['type']}
URL: {data['url']}
Payload: {data['payload']}
Impact: {data['impact']}"""
    return {"report": query_huggingface(prompt, "schoolkithub/cyberbase", 200)}

def killchain_ai(data):
    prompt = f"Suggest a chained attack path using these findings: {data}"
    return {"chain": query_huggingface(prompt, "schoolkithub/primus")}

def heatseeker_score(data):
    score = 0
    if "admin" in data["url"]:
        score += 3
    if len(data.get("params", {})) > 3:
        score += 2
    if data["status_code"] == 500:
        score += 5
    return {"score": score}

def recon_gpt(prompt):
    return {"task": query_huggingface(prompt, "schoolkithub/neox")}

def exploit_suggestion(data):
    prompt = f"Given this bug bounty scenario, how might an attacker exploit it? {data}"
    return {"exploit": query_huggingface(prompt, "schoolkithub/hackphyr")}
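The response handling in `query_huggingface` expects the Inference API to return a list of dicts carrying a `generated_text` key, and falls back to an error string otherwise. A standalone sketch of that parsing logic, exercised against hypothetical sample payloads (no network calls, no token needed):

```python
# Standalone sketch of the response parsing used in query_huggingface.
# The sample payloads below are hypothetical, not captured API output.

def extract_generated_text(result):
    """Accept a non-empty list of dicts with 'generated_text'; reject anything else."""
    if isinstance(result, list) and result and "generated_text" in result[0]:
        return result[0]["generated_text"]
    return "[ERROR] Unexpected response format"

# A successful generation: list of dicts
print(extract_generated_text([{"generated_text": "Example report body"}]))
# An error payload (e.g. model still loading): dict, not list
print(extract_generated_text({"error": "Model is loading"}))
```

Guarding on `result` being non-empty matters: an empty list would otherwise raise `IndexError` before the key check.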
blackgat/app/main.py
DELETED
@@ -1,51 +0,0 @@
# app/main.py
from fastapi import FastAPI
from pydantic import BaseModel
from typing import Dict

from app.ai_modules import (
    scribe_generate, killchain_ai,
    heatseeker_score, recon_gpt,
    exploit_suggestion
)

app = FastAPI(title="BlackGat AI", version="1.0", description="Cloud AI for Bug Bounty Hunters")

class VulnReport(BaseModel):
    type: str
    url: str
    payload: str
    impact: str

class Finding(BaseModel):
    data: str

class ExploitRequest(BaseModel):
    data: str

class HeatInput(BaseModel):
    url: str
    params: Dict[str, str]
    status_code: int

class ReconPrompt(BaseModel):
    prompt: str

@app.post("/scribe")
def scribe(data: VulnReport):
    return scribe_generate(data.dict())

@app.post("/killchain")
def killchain(data: Finding):
    return killchain_ai(data.data)

@app.post("/score")
def score(data: HeatInput):
    return heatseeker_score(data.dict())

@app.post("/recon")
def recon(data: ReconPrompt):
    return recon_gpt(data.prompt)

@app.post("/exploit")
def exploit(data: ExploitRequest):
    return exploit_suggestion(data.data)
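Unlike the LLM-backed endpoints, `/score` wraps a pure heuristic, so it can be exercised without a model, token, or running server. A minimal sketch (the findings below are hypothetical):

```python
# Standalone copy of the HeatSeeker heuristic from app/ai_modules.py,
# exercised with hypothetical findings (no server or LLM required).

def heatseeker_score(data):
    score = 0
    if "admin" in data["url"]:            # admin surfaces are higher value
        score += 3
    if len(data.get("params", {})) > 3:   # many parameters -> more attack surface
        score += 2
    if data["status_code"] == 500:        # server errors hint at fragile code paths
        score += 5
    return {"score": score}

# Both the "admin" (+3) and 500 (+5) signals fire here
print(heatseeker_score({"url": "https://example.com/admin", "params": {}, "status_code": 500}))
```

This is the same shape of payload the `HeatInput` model validates before the endpoint forwards it on.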
blackgat/app/scanner_agent.py
DELETED
File without changes
blackgat/dashboard/dashboard.py
DELETED
@@ -1,59 +0,0 @@
# dashboard/dashboard.py
import json

import streamlit as st
import requests

API_URL = "http://localhost:8000"

st.set_page_config(page_title="BlackGat AI", layout="wide")
st.title("🧠 BlackGat AI – Bug Bounty Automation")

tool = st.sidebar.selectbox("Choose Agent", ["Scribe", "KillChain", "HeatSeeker", "ReconGPT", "Exploit Suggestion"])

if tool == "Scribe":
    st.subheader("📝 Generate Bug Bounty Report")
    vuln_type = st.text_input("Vulnerability Type")
    vuln_url = st.text_input("Vulnerable URL")
    payload = st.text_input("Payload")
    impact = st.text_area("Impact")
    if st.button("Generate"):
        res = requests.post(f"{API_URL}/scribe", json={
            "type": vuln_type,
            "url": vuln_url,
            "payload": payload,
            "impact": impact
        })
        st.markdown(res.json()["report"])

elif tool == "KillChain":
    st.subheader("🔗 Suggest Chained Attack")
    findings = st.text_area("Paste findings (JSON or summary)")
    if st.button("Suggest"):
        res = requests.post(f"{API_URL}/killchain", json={"data": findings})
        st.markdown(res.json()["chain"])

elif tool == "HeatSeeker":
    st.subheader("🛡️ Risk Scoring")
    url = st.text_input("Target URL")
    status_code = st.number_input("Status Code", min_value=100, max_value=599, value=200)
    params = st.text_input("Params JSON", '{"user":"admin"}')
    if st.button("Score"):
        res = requests.post(f"{API_URL}/score", json={
            "url": url,
            "params": json.loads(params),  # parse the user-supplied JSON safely instead of eval()
            "status_code": status_code
        })
        st.success(f"Score: {res.json()['score']}")

elif tool == "ReconGPT":
    st.subheader("🛰️ Recon Task Generator")
    prompt = st.text_area("What do you want to find?", "Find all login endpoints.")
    if st.button("Generate Task"):
        res = requests.post(f"{API_URL}/recon", json={"prompt": prompt})
        st.markdown(res.json()["task"])

elif tool == "Exploit Suggestion":
    st.subheader("💥 Attacker Simulation")
    scenario = st.text_area("Bug scenario or data")
    if st.button("Get Exploit Advice"):
        res = requests.post(f"{API_URL}/exploit", json={"data": scenario})
        st.markdown(res.json()["exploit"])
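The HeatSeeker panel collects `params` as a raw JSON string from a text box before posting it to `/score`. `json.loads` is the safe way to turn that string into a dict, since unlike `eval` it cannot execute arbitrary code from user input. A minimal sketch with a hypothetical input matching the panel's default:

```python
import json

# The "Params JSON" text box yields a string like the panel's default below;
# json.loads converts it into the dict the /score endpoint expects.
raw = '{"user": "admin"}'
params = json.loads(raw)
print(params)  # {'user': 'admin'}
```

Malformed input raises `json.JSONDecodeError`, which a dashboard can catch and surface as a validation message rather than crashing.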
blackgat/models/load_llms.py
DELETED
@@ -1,21 +0,0 @@
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

def load_llms():
    return {
        "cyberbase": pipeline(
            "text-generation",
            model=AutoModelForCausalLM.from_pretrained("CyberNative/CyberBase-13b", device_map="auto"),
            tokenizer=AutoTokenizer.from_pretrained("CyberNative/CyberBase-13b"),
        ),
        "primus": pipeline(
            "text-generation",
            model=AutoModelForCausalLM.from_pretrained("TrendMicro/llama-primus-reasoning", device_map="auto"),
            tokenizer=AutoTokenizer.from_pretrained("TrendMicro/llama-primus-reasoning"),
        ),
        "hackphyr": pipeline(
            "text-generation",
            model=AutoModelForCausalLM.from_pretrained("hackphyr/hackphyr", device_map="auto"),
            tokenizer=AutoTokenizer.from_pretrained("hackphyr/hackphyr"),
        ),
        "neox": pipeline(
            "text-generation",
            model=AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", device_map="auto"),
            tokenizer=AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b"),
        ),
    }
blackgat/requirements.txt
DELETED
@@ -1,8 +0,0 @@
fastapi
uvicorn
streamlit
transformers
torch
requests
pydantic
gradio
blackgat/spaces.yaml
DELETED
@@ -1,2 +0,0 @@
sdk: streamlit
app_file: dashboard/dashboard.py