
🧠 AI-Text-Similarity-Model

A model fine-tuned on the STS Benchmark (Semantic Textual Similarity) dataset. It computes the semantic closeness between sentence pairs using cosine similarity and is well suited to tasks such as duplicate detection, semantic search, question-answer matching, and text clustering.
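For reference, the score for a sentence pair is the cosine similarity of their embedding vectors $u$ and $v$ (the standard formula, noted here for clarity):

$$\mathrm{sim}(u, v) = \frac{u \cdot v}{\lVert u \rVert \, \lVert v \rVert}$$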


✨ Model Highlights

  • 📌 Based on sentence-transformers/paraphrase-MiniLM-L6-v2
  • 🔍 Fine-tuned on the STS Benchmark (English)
  • 📈 Outputs cosine similarity between 0 (not similar) and 1 (very similar)
  • ⚡ Fast, lightweight, and efficient on both CPU and GPU
  • 🔁 Trained with contrastive loss using sentence embeddings

🧠 Intended Uses

  • ✅ Duplicate sentence detection
  • ✅ Semantic search engines
  • ✅ Question-Answer pair matching
  • ✅ Plagiarism detection
  • ✅ Conversational agent re-ranking
  • ✅ Text clustering and grouping based on meaning

🚫 Limitations

  • ❌ Trained on English sentences only
  • ❌ Not suitable for zero-shot multilingual similarity
  • ❌ Accuracy may degrade on domain-specific or technical content
  • ❌ Slight performance dip for long sequences (>128 tokens)


πŸ‹οΈβ€β™‚οΈ Training Details

Field Value
Base Model paraphrase-MiniLM-L6-v2
Dataset stsb_multi_mt, English
Framework PyTorch with πŸ€— Transformers
Epochs 3
Batch Size 16
Max Length 128 tokens
Optimizer AdamW
Loss CrossEntropyLoss (token-level)
Device Trained on CUDA-enabled GPU
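The training script itself is not included in this card. As a minimal sketch, a comparable run could look roughly as follows with sentence-transformers and the settings from the table above; CosineSimilarityLoss stands in for the contrastive objective mentioned in the highlights, and warmup_steps is an illustrative assumption.

from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Base model and sequence length from the table above
model = SentenceTransformer("sentence-transformers/paraphrase-MiniLM-L6-v2")
model.max_seq_length = 128

# English split of stsb_multi_mt; gold scores are 0-5, scaled to 0-1
train = load_dataset("stsb_multi_mt", name="en", split="train")
examples = [
    InputExample(texts=[row["sentence1"], row["sentence2"]],
                 label=row["similarity_score"] / 5.0)
    for row in train
]

loader = DataLoader(examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# model.fit uses AdamW by default
model.fit(
    train_objectives=[(loader, train_loss)],
    epochs=3,
    warmup_steps=100,  # assumed value, not stated in the card
)
model.save("AI-Text-Similarity-Model")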

📊 Evaluation Metrics

| Metric    | Score |
|-----------|-------|
| Accuracy  | 0.82  |
| F1-Score  | 0.87  |
| Precision | 0.84  |
| Recall    | 0.85  |
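The card does not state how these metrics were computed. One common recipe for similarity models is to threshold the cosine score into a binary similar / not-similar decision and score it with scikit-learn; the sketch below assumes a 0.5 threshold and hypothetical labelled pairs.

from sentence_transformers import SentenceTransformer, util
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

model = SentenceTransformer("AmanSengar/AI-Text-Similarity-Model")

def evaluate(pairs, gold_labels, threshold=0.5):
    # pairs: list of (sentence1, sentence2); gold_labels: 1 = similar, 0 = not
    preds = []
    for a, b in pairs:
        emb1 = model.encode(a, convert_to_tensor=True)
        emb2 = model.encode(b, convert_to_tensor=True)
        preds.append(1 if util.cos_sim(emb1, emb2).item() >= threshold else 0)
    acc = accuracy_score(gold_labels, preds)
    prec, rec, f1, _ = precision_recall_fscore_support(
        gold_labels, preds, average="binary"
    )
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}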


🚀 Usage

from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned similarity model
model_name = "AmanSengar/AI-Text-Similarity-Model"
model = SentenceTransformer(model_name)
model.eval()


# Inference: encode both sentences and return their cosine similarity
def get_similarity(text1, text2):
    emb1 = model.encode(text1, convert_to_tensor=True)
    emb2 = model.encode(text2, convert_to_tensor=True)
    score = util.cos_sim(emb1, emb2).item()
    return round(score, 4)

# Test Example
print(get_similarity("A man is eating food.", "A person is having a meal."))
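The same embeddings also cover the semantic-search and duplicate-detection use cases listed above. Below is a brief sketch with util.semantic_search, re-using the model loaded above (the corpus and query are illustrative):

# Encode a small corpus once, then rank candidates for a query
corpus = [
    "A man is eating food.",
    "The weather is sunny today.",
    "Someone is playing a guitar.",
]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query_emb = model.encode("A person is having a meal.", convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=2)[0]

for hit in hits:
    print(corpus[hit["corpus_id"]], round(hit["score"], 4))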

🧩 Quantization

Post-training static quantization applied using PyTorch to reduce model size and accelerate inference on edge devices.
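The quantized weights already ship in this repository, so no extra step is needed to use them. Purely as an illustration of post-training quantization with PyTorch, the sketch below applies dynamic quantization (the simpler variant) to the Linear layers; it is not necessarily the exact recipe used for this model.

import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AmanSengar/AI-Text-Similarity-Model")

# Swap Linear layers for int8 dynamically quantized versions (CPU inference)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

emb = quantized.encode("A man is eating food.", convert_to_tensor=True)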

🗂 Repository Structure

.
├── model/               # Quantized model files
├── tokenizer_config/    # Tokenizer and vocab files
├── model.safetensors    # Fine-tuned model in safetensors format
└── README.md            # Model card

🤝 Contributing

Open to improvements and feedback! Feel free to submit a pull request or open an issue if you find any bugs or want to enhance the model.
