Repo-JEPA: Semantic Code Navigator (SOTA 0.90 MRR)

A Joint Embedding Predictive Architecture (JEPA) for semantic code search, trained on 411,000 real-world Python functions using an NVIDIA H100.

πŸ† Performance

Tested on 1,000 unseen real-world Python functions from CodeSearchNet.

| Metric      | Result | Target |
|-------------|--------|--------|
| MRR         | 0.9052 | 0.60   |
| Hits@1      | 86.2%  | -      |
| Hits@5      | 95.9%  | -      |
| Hits@10     | 97.3%  | -      |
| Median Rank | 1.0    | -      |
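
These are the standard retrieval metrics, all derived from the rank of the correct snippet for each query. The snippet below is an illustrative way to compute them from a list of per-query ranks; it is not the evaluation script used for this model:

import statistics

def retrieval_metrics(ranks):
    """MRR, Hits@k, and median rank from 1-indexed ranks of the correct hit."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in (1, 5, 10)}
    return mrr, hits, statistics.median(ranks)

# Example: three queries whose correct snippets ranked 1st, 1st, and 4th
mrr, hits, median = retrieval_metrics([1, 1, 4])
print(f"MRR={mrr:.4f}  Hits@1={hits[1]:.1%}  Median Rank={median}")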

🧩 Usage (AutoModel)

from transformers import AutoModel, AutoTokenizer

# 1. Load the model (custom architecture, so trust_remote_code is required)
model = AutoModel.from_pretrained("uddeshya-k/RepoJepa", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

# 2. Encode Code
code = "def handle_login(user): return auth.verify(user)"
code_embed = model.encode_code(**tokenizer(code, return_tensors="pt"))

# 3. Encode Query
query = "how to authenticate users?"
query_embed = model.encode_query(**tokenizer(query, return_tensors="pt"))

# 4. Compute similarity (dot product of the two embeddings; higher = better match)
similarity = (code_embed @ query_embed.T).item()
print(f"Similarity: {similarity:.4f}")

πŸ—οΈ Technical Details

  • Backbone: CodeBERT (RoBERTa-style)
  • Loss: VICReg (Variance-Invariance-Covariance Regularization); a minimal sketch follows this list
  • Hardware: NVIDIA H100 PCIe (80GB VRAM)
  • Optimizer: AdamW + OneCycleLR
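
Below is a minimal sketch of a VICReg-style loss plus the AdamW + OneCycleLR setup listed above. The three terms and the 25/25/1 default weights follow the original VICReg paper; the actual weights, batch size, learning rate, and schedule used to train this model are not published, so treat every hyperparameter here as a placeholder:

import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0):
    # Invariance: pull paired (code, query) embeddings together
    sim = F.mse_loss(z_a, z_b)
    # Variance: keep each dimension's std above 1 to prevent representation collapse
    var = (F.relu(1 - torch.sqrt(z_a.var(dim=0) + 1e-4)).mean()
           + F.relu(1 - torch.sqrt(z_b.var(dim=0) + 1e-4)).mean())
    # Covariance: penalize off-diagonal covariance to decorrelate dimensions
    n, d = z_a.shape
    def off_diag_sq(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (n - 1)
        return (cov.pow(2).sum() - cov.pow(2).diag().sum()) / d
    return sim_w * sim + var_w * var + cov_w * (off_diag_sq(z_a) + off_diag_sq(z_b))

# Toy batch standing in for paired code/query embeddings
z_code, z_query = torch.randn(32, 768), torch.randn(32, 768)
print(vicreg_loss(z_code, z_query))

# AdamW + OneCycleLR, as listed above (lr and total_steps are placeholders)
params = [torch.nn.Parameter(torch.randn(768, 768))]  # stand-in for model.parameters()
optimizer = torch.optim.AdamW(params, lr=3e-5, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=3e-5, total_steps=10_000)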