---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: transformers
license: mit
language:
  - en
  - pt
metrics:
  - accuracy
pipeline_tag: text-generation
tags:
  - education
  - logic
  - math
  - low-resource
  - transformers
  - open-source
  - causal-lm
  - lambdaindie
---

# lambdAI — Lightweight Math & Logic Reasoning Model

**lambdAI** is a compact language model fine-tuned from TinyLlama-1.1B-Chat-v1.0 for educational reasoning tasks in Portuguese and English. It focuses on logic, number theory, and mathematics, and is designed for fast inference with minimal computational requirements.

## Model Architecture

- **Base model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Fine-tuning strategy:** LoRA, applied to the `q_proj` and `v_proj` attention projections (a configuration sketch follows this list)
- **Quantization:** 4-bit NF4 (via a bitsandbytes `bnb_config`)
- **Dataset:** HuggingFaceH4/MATH, `number_theory` subset
- **Max tokens per sample:** 512
- **Batch size:** 20 per device
- **Epochs:** 3

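A minimal sketch of this fine-tuning setup, assuming the standard `transformers`, `peft`, and `bitsandbytes` APIs; the LoRA rank and alpha below are illustrative placeholders, not values taken from the actual training run:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Load the 1.1B base model quantized to 4-bit NF4 to keep memory low.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach LoRA adapters to the attention query/value projections only.
lora_config = LoraConfig(
    r=8,              # illustrative rank
    lora_alpha=16,    # illustrative scaling factor
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```

Training would then iterate over the `number_theory` split with samples truncated to 512 tokens, a per-device batch size of 20, and 3 epochs, e.g. via the `Trainer` API.
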
## Example Usage (Python)

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lambdaindie/lambdai")
tokenizer = AutoTokenizer.from_pretrained("lambdaindie/lambdai")

# Portuguese prompt: "Problem: Prove that 17 is a prime number."
input_text = "Problema: Prove que 17 é um número primo."
inputs = tokenizer(input_text, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
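
Since the base model is a chat-tuned TinyLlama, prompts can also be wrapped in its chat template, assuming the fine-tune preserves that template (an assumption, not confirmed above):

```python
# Assumes the fine-tune kept TinyLlama's chat template.
messages = [{"role": "user", "content": "Prove que 17 é um número primo."}]
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(prompt_ids, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```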

## About Lambda

Lambda is an indie tech startup founded by Marius Jabami in Angola, focused on AI-driven educational tools, automation, and lightweight software solutions. The lambdAI model is the first release in a planned series of educational LLMs optimized for reasoning, logic, and low-resource deployment.

Stay updated on the project at [lambdaindie.github.io](https://lambdaindie.github.io) and [huggingface.co/lambdaindie](https://huggingface.co/lambdaindie).


---

Developed with care by Marius Jabami — Powered by ambition and open source.

---