---
tags:
- autotrain
- text-generation
- text-generation-inference
- peft
- llama-3
- finance
- crypto
- agents
- workflow-automation
- soul-ai
library_name: transformers
base_model: meta-llama/Llama-3.1-8B
license: other
widget:
- text: Ask me something about AI agents or crypto.
- text: What kind of automation can LLMs perform?
---
# 🧠 CryptoAI: Llama 3.1 Fine-Tuned for Finance & Autonomous Agents
CryptoAI is a purpose-tuned LLM based on Meta's Llama 3.1–8B, trained on domain-specific data focused on financial logic, LLM agent workflows, and automated task generation. Designed to power on-chain AI agents, it's part of the broader CryptoAI ecosystem for monetized intelligence.
## 📂 Dataset Summary
This model was fine-tuned on more than 10,000 instruction-style samples simulating:
- Financial queries and tokenomics reasoning
- LLM-agent interaction patterns
- Crypto automation logic
- DeFi, trading signals, news interpretation
- Smart contract and API-triggered tasks
- Natural language prompts for dynamic workflow creation
The format follows a custom instruction-based structure optimized for reasoning tasks and agentic workflows—not just casual conversation.
See our Docs page for more info: docs.soulai.info
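The exact schema isn't published in this card, so the dict below is only a hypothetical illustration of what one instruction-style sample might look like; the field names are assumptions.

```python
# Hypothetical illustration of one instruction-style training sample.
# The field names ("instruction", "input", "output") are assumptions,
# not the published schema; see docs.soulai.info for the actual format.
sample = {
    "instruction": "Summarize the tokenomics of the following project and flag any risks.",
    "input": "Total supply 1B; 40% team allocation with 2-year vesting; 10% liquidity.",
    "output": "The 40% team allocation is a centralization risk even with vesting; ...",
}
```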
## 💻 Usage (via Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "YOUR_HF_USERNAME/YOUR_MODEL_NAME"

# Load the tokenizer and model; device_map="auto" places weights on the
# available GPU(s), and torch_dtype="auto" uses the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Build a chat-formatted prompt and generate a response.
messages = [{"role": "user", "content": "How do autonomous LLM agents work?"}]
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(response)
```
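The snippet above uses greedy decoding defaults. For more varied, agent-style responses you can enable sampling; the values below are illustrative assumptions, not tuned recommendations.

```python
# Optional: sampling instead of greedy decoding (values are illustrative assumptions).
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
```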
---
## 🛠️ Hugging Face Inference API

Use it via the API for quick tasks:

```bash
curl https://api-inference.huggingface.co/models/YOUR_HF_USERNAME/YOUR_MODEL_NAME \
  -X POST \
  -d '{"inputs": "Tell me something about agent-based AI."}' \
  -H "Authorization: Bearer YOUR_HF_TOKEN"
```
## 🧬 Model Details

- Base Model: meta-llama/Llama-3.1-8B
- Tuning Method: PEFT / LoRA
- Training Platform: 🤗 AutoTrain
- Optimized For: conversational logic, chain-of-thought reasoning, and agent workflow simulation
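If the LoRA adapter is published separately rather than merged into the base weights, it can be attached to the base model with the peft library; the repository IDs below are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder repo IDs; substitute the actual base and adapter locations.
base_id = "meta-llama/Llama-3.1-8B"
adapter_id = "YOUR_HF_USERNAME/YOUR_MODEL_NAME"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_id).eval()
```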
## 🔗 CryptoAI Ecosystem Integration

CryptoAI is designed to plug into its own decentralized agent network:

- Deploy agents via Agent Forge
- Trigger smart contracts or APIs through LLM-generated logic
- Earn revenue through tokenized usage fees in $SOUL
- Run tasks autonomously while sharing fees with dataset, model, and node contributors
## ⚙️ Ideal Use Cases

- Building conversational agent front-ends (chat, Discord, IVR)
- Automating repetitive financial workflows
- Simulating DeFi scenarios and logic
- Teaching agents to turn vague, ambiguous tasks into structured outputs (see the sketch below)
- Integrating GPT-like intelligence with programmable smart contract logic
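As one concrete pattern for the structured-output use case above, a system prompt can pin the model to a JSON schema and the reply can be parsed programmatically. The schema and prompt wording here are illustrative assumptions, and the snippet reuses the tokenizer and model loaded in the Usage section.

```python
import json

# Hypothetical schema-constrained prompt; reuses `tokenizer` and `model` from the Usage section.
messages = [
    {"role": "system",
     "content": 'Reply ONLY with JSON of the form {"task": str, "steps": [str], "needs_approval": bool}.'},
    {"role": "user", "content": "Handle the weekly treasury report somehow."},
]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

plan = json.loads(reply)  # may raise if the model drifts from the schema; validate before acting
print(plan["steps"])
```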
## 🔒 License

This model is distributed under a restricted "other" license. Commercial use or use for LLM training requires permission. The base Llama 3.1 license and Meta's terms still apply.
## 💡 Notes & Limitations

- Output may vary depending on GPU, prompt phrasing, and context.
- Not suitable for high-stakes financial decision-making out of the box.
- Use it as a base agent layer with real-time validation or approval loops; a minimal sketch follows.
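As an illustration of the approval-loop suggestion above, a thin wrapper can require explicit human confirmation before any model-proposed action is executed; the function names are hypothetical and not part of this model's API.

```python
# Hypothetical approval gate: nothing proposed by the model runs without a human "y".
def run_with_approval(propose_action, execute_action):
    action = propose_action()          # e.g. a generate() call returning a structured plan
    print(f"Proposed action:\n{action}")
    if input("Approve? [y/N] ").strip().lower() == "y":
        return execute_action(action)  # e.g. call an API or submit a transaction
    print("Action rejected; nothing executed.")
    return None
```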
## 📞 Get In Touch

Want to build agents with CryptoAI or license the model?
## 💥 Powering the Next Wave of Agentic Intelligence

CryptoAI isn't just a chatbot; it's a programmable foundation for monetized, on-chain agent workflows. Train once, deploy forever.