# Paraphrase Detection Pipeline using Transformers
This repository provides a complete pipeline to fine-tune a transformer model for **Paraphrase Detection** using the PAWS dataset.
---
## Steps
### 1. Load Dataset
Load the PAWS dataset, which contains sentence pairs with labels indicating whether they are paraphrases.
```python
from datasets import load_dataset
dataset = load_dataset("paws", "labeled_final")
```
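As a quick, optional sanity check, you can inspect the splits and a sample record; in the `labeled_final` configuration each example carries `sentence1`, `sentence2`, and a binary `label` (1 means paraphrase).
```python
# Optional: inspect the splits and one example before training.
print(dataset)                # train / validation / test splits
print(dataset["train"][0])    # fields: id, sentence1, sentence2, label (1 = paraphrase)
```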
### 2. Preprocess and Tokenize
Tokenize sentence pairs with padding and truncation.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/paraphrase-MiniLM-L6-v2")
def preprocess_function(examples):
    return tokenizer(
        examples["sentence1"],
        examples["sentence2"],
        truncation=True,
        padding="max_length",
        max_length=128,
    )
tokenized_datasets = dataset.map(preprocess_function, batched=True)
```
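If you want to see exactly what the model receives, encode a single pair by hand; with a BERT-style tokenizer such as this one, the output typically contains `input_ids`, `token_type_ids`, and `attention_mask`.
```python
# Encode one sentence pair to inspect the features the tokenizer produces.
encoded = tokenizer(
    "How old are you?",
    "What is your age?",
    truncation=True,
    padding="max_length",
    max_length=128,
)
print(list(encoded.keys()))  # e.g. ['input_ids', 'token_type_ids', 'attention_mask']
```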
### 3. Load Model
Load a pre-trained encoder with a sequence classification head. The encoder weights come from the checkpoint, but the classification head is newly initialized (Transformers logs a warning about this), which is why the model must be fine-tuned before use.
```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("sentence-transformers/paraphrase-MiniLM-L6-v2", num_labels=2)
```
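Because the classification head is new, its labels default to `LABEL_0`/`LABEL_1`. Optionally, you can attach human-readable names when loading the model; the names below are illustrative (PAWS itself provides only 0/1 labels, where 1 means paraphrase).
```python
# Optional: give the two classes readable names so pipeline outputs are self-describing.
# The names are illustrative; PAWS provides only 0/1 labels (1 = paraphrase).
model = AutoModelForSequenceClassification.from_pretrained(
    "sentence-transformers/paraphrase-MiniLM-L6-v2",
    num_labels=2,
    id2label={0: "not_paraphrase", 1: "paraphrase"},
    label2id={"not_paraphrase": 0, "paraphrase": 1},
)
```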
### 4. Fine-tune the Model
Set up the training arguments and fine-tune the model using the Trainer API.
```python
from transformers import TrainingArguments, Trainer
import evaluate
training_args = TrainingArguments(
    output_dir="./paraphrase-detector",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    num_train_epochs=3,
    weight_decay=0.01,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
)
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = logits.argmax(axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
trainer.save_model("paraphrase-detector")
```
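If a run is interrupted, the `Trainer` can pick up from the most recent checkpoint written to `output_dir`:
```python
# Resume an interrupted run from the latest checkpoint in output_dir.
trainer.train(resume_from_checkpoint=True)
```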
### 5. Evaluate
Evaluate the fine-tuned model on the validation set (the `eval_dataset` passed to the `Trainer`).
```python
eval_results = trainer.evaluate()
print(eval_results)
```
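PAWS also ships a test split, so you can report final numbers on held-out data as well; metric keys will carry the `test` prefix here.
```python
# Optional: evaluate on the held-out test split.
test_results = trainer.evaluate(tokenized_datasets["test"], metric_key_prefix="test")
print(test_results)
```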
### 6. Inference
Use the fine-tuned model for paraphrase detection inference.
```python
from transformers import pipeline
paraphrase_pipeline = pipeline("text-classification", model="paraphrase-detector", tokenizer=tokenizer)
example = paraphrase_pipeline({
    "text": "How old are you?",
    "text_pair": "What is your age?"
})
print(example)
```
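Alternatively, you can run inference directly with the model and tokenizer, which makes the class probabilities explicit. This is a minimal sketch assuming the tokenizer was saved alongside the model in `paraphrase-detector` (the `Trainer` does this when a tokenizer is passed to it).
```python
# Manual inference without the pipeline: tokenize, forward pass, softmax.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

clf = AutoModelForSequenceClassification.from_pretrained("paraphrase-detector")
tok = AutoTokenizer.from_pretrained("paraphrase-detector")

inputs = tok("How old are you?", "What is your age?", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = clf(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # column 1 is the paraphrase probability (label 1 in PAWS)
```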
---
## Requirements
- `datasets`
- `transformers`
- `evaluate`
- `torch` (the `Trainer` requires the PyTorch backend; recent `transformers` releases also expect `accelerate`)
Install dependencies with:
```bash
pip install datasets transformers evaluate torch accelerate
```
---
## Author
Your Name - [email protected]
---
## License
MIT License