Token Classification
Transformers
Safetensors
English
bert
ner
named-entity-recognition
text-classification
transformer
pretrained-model
huggingface
real-time-inference
efficient-nlp
micro-nlp
chatbot
information-extraction
document-understanding
search-enhancement
medical-nlp
financial-nlp
legal-nlp
general-purpose-nlp
on-device-nlp
Update README.md
README.md

metrics:
- accuracy
pipeline_tag: token-classification
library_name: transformers
new_version: v1.1
tags:
- token-classification
- ner
- information-extraction
- search-enhancement
- knowledge-graph
- legal-nlp
- medical-nlp
- financial-nlp
base_model:
- boltuix/bert-mini
---

# EntityBERT Model

## Model Details

### Description
The `boltuix/EntityBERT` model is a lightweight transformer fine-tuned for **Named Entity Recognition (NER)**, built on the `boltuix/bert-mini` base model. Optimized for efficiency, it identifies 36 entity types (e.g., people, organizations, locations, dates) in English text, making it well suited for applications such as information extraction, chatbots, and search enhancement.

- **Dataset**: [boltuix/conll2025-ner](https://huggingface.co/datasets/boltuix/conll2025-ner) (143,709 entries, 6.38 MB)
- **Entity Types**: 36 NER tags (18 entity categories with B-/I- tags + O)
- **Training Examples**: ~115,812 | **Validation**: ~15,680 | **Test**: ~12,217
- **Domains**: News, user-generated content, research corpora
- **Tasks**: Sentence-level and document-level NER
- **Version**: v1.0

> **Note**: The dataset link is a placeholder. Replace it with the correct Hugging Face URL once available.

### Info
- **Developer**: Boltuix
- **License**: Apache-2.0
- **Language**: English
- **Type**: Transformer-based token classification
- **Trained**: Before June 11, 2025
- **Base Model**: `boltuix/bert-mini`
- **Parameters**: ~4.4M
- **Size**: ~15 MB

### Links
- **Model Repository**: [boltuix/EntityBERT](https://huggingface.co/boltuix/EntityBERT) (placeholder, update with the correct URL)
- **Dataset**: [boltuix/conll2025-ner](#download-instructions) (placeholder, update with the correct URL)
- **Hugging Face Docs**: [Transformers](https://huggingface.co/docs/transformers)
- **Demo**: Coming soon

---

## Use Cases for NER

### Direct Applications
- **Information Extraction**: Identify names (PERSON), locations (GPE), and dates (DATE) in articles, blogs, or reports.
- **Chatbots & Virtual Assistants**: Improve user query understanding by recognizing entities.
- **Search Enhancement**: Enable entity-based semantic search (e.g., "news about Paris in 2025").
- **Knowledge Graphs**: Construct structured graphs connecting entities such as ORG and PERSON.

### Downstream Tasks
- **Domain Adaptation**: Fine-tune for specialized fields such as medical, legal, or financial NER.
- **Multilingual Extensions**: Retrain for non-English languages.
- **Custom Entities**: Adapt for niche domains (e.g., product IDs, stock tickers).

### Limitations
- **English-Only**: Limited to English text out of the box.
- **Domain Bias**: Trained on `boltuix/conll2025-ner`, which may favor news and formal text, so it can be weaker on informal or social media content.
- **Generalization**: May struggle with rare or highly contextual entities not seen in the dataset.

---

## Getting Started

### Inference Code
Run NER with the following Python code:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("boltuix/EntityBERT")
model = AutoModelForTokenClassification.from_pretrained("boltuix/EntityBERT")

# Input text
text = "Elon Musk launched Tesla in California on March 2025."
inputs = tokenizer(text, return_tensors="pt")

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
predictions = outputs.logits.argmax(dim=-1)

# Map predictions to labels
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
label_map = model.config.id2label
labels = [label_map[p.item()] for p in predictions[0]]

# Print results
for token, label in zip(tokens, labels):
    if token not in tokenizer.all_special_tokens:
        print(f"{token:15} → {label}")
```

### Example Output
```
Elon        → B-PERSON
Musk        → I-PERSON
launched    → O
Tesla       → B-ORG
in          → O
California  → B-GPE
on          → O
March       → B-DATE
2025        → I-DATE
.           → O
```
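
The same checkpoint can also be run through the `transformers` pipeline API, which merges subword pieces into entity groups. This is a minimal sketch and not part of the card's original instructions; it assumes the repository id above resolves and that the checkpoint ships `id2label`/`label2id` mappings in its config:

```python
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation groups B-/I- pieces into entity spans
ner = pipeline(
    "token-classification",
    model="boltuix/EntityBERT",
    aggregation_strategy="simple",
)

for ent in ner("Elon Musk launched Tesla in California on March 2025."):
    print(f'{ent["word"]:15} {ent["entity_group"]:10} {ent["score"]:.2f}')
```

Expected groups for this sentence would be PERSON, ORG, GPE, and DATE, mirroring the token-level output above.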

### Requirements
```bash
pip install transformers torch pandas pyarrow
```
- **Python**: 3.8+
- **Storage**: ~15 MB for model weights
- **Optional**: `seqeval` for evaluation, CUDA for GPU acceleration

---

## Entity Labels
The model supports 36 NER tags from the `boltuix/conll2025-ner` dataset, using the **BIO tagging scheme**:
- **B-**: Beginning of an entity
- **I-**: Inside of an entity
- **O**: Outside of any entity

| Tag Name | Purpose |
|------------------|----------------------------------------------------------------------------|
| O | Outside of any named entity (e.g., "the", "is") |
| B-CARDINAL | Beginning of a cardinal number (e.g., "1000") |
| B-DATE | Beginning of a date (e.g., "January") |
| B-EVENT | Beginning of an event (e.g., "Olympics") |
| B-FAC | Beginning of a facility (e.g., "Eiffel Tower") |
| B-GPE | Beginning of a geopolitical entity (e.g., "Tokyo") |
| B-LANGUAGE | Beginning of a language (e.g., "Spanish") |
| B-LAW | Beginning of a law or legal document (e.g., "Constitution") |
| B-LOC | Beginning of a non-GPE location (e.g., "Pacific Ocean") |
| B-MONEY | Beginning of a monetary value (e.g., "$100") |
| B-NORP | Beginning of a nationality/religious/political group (e.g., "Democrat") |
| B-ORDINAL | Beginning of an ordinal number (e.g., "first") |
| B-ORG | Beginning of an organization (e.g., "Microsoft") |
| B-PERCENT | Beginning of a percentage (e.g., "50%") |
| B-PERSON | Beginning of a person's name (e.g., "Elon Musk") |
| B-PRODUCT | Beginning of a product (e.g., "iPhone") |
| B-QUANTITY | Beginning of a quantity (e.g., "two liters") |
| B-TIME | Beginning of a time (e.g., "noon") |
| B-WORK_OF_ART | Beginning of a work of art (e.g., "Mona Lisa") |
| I-CARDINAL | Inside of a cardinal number |
| I-DATE | Inside of a date (e.g., "2025" in "January 2025") |
| I-EVENT | Inside of an event name |
| I-FAC | Inside of a facility name |
| I-GPE | Inside of a geopolitical entity |
| I-LANGUAGE | Inside of a language name |
| I-LAW | Inside of a legal document title |
| I-LOC | Inside of a location |
| I-MONEY | Inside of a monetary value |
| I-NORP | Inside of a NORP entity |
| I-ORDINAL | Inside of an ordinal number |
| I-ORG | Inside of an organization name |
| I-PERCENT | Inside of a percentage |
| I-PERSON | Inside of a person's name |
| I-PRODUCT | Inside of a product name |
| I-QUANTITY | Inside of a quantity |
| I-TIME | Inside of a time phrase |
| I-WORK_OF_ART | Inside of a work of art title |

**Example**:
Text: `"Tesla opened in Shanghai on April 2025"`
Tags: `[B-ORG, O, O, B-GPE, O, B-DATE, I-DATE]`

---

## Performance

Evaluated on the `boltuix/conll2025-ner` test split (~12,217 examples) using `seqeval`:

| Metric | Score |
|------------|-------|
| Precision | 0.84 |
| Recall | 0.86 |
| F1 Score | 0.85 |
| Accuracy | 0.91 |

*Note*: Performance may vary on other domains or text types.

---

## Training Setup

- **Hardware**: NVIDIA GPU
- **Training Time**: ~1.5 hours
- **Parameters**: ~4.4M
- **Optimizer**: AdamW
- **Precision**: FP32
- **Batch Size**: 16
- **Learning Rate**: 2e-5

---

## Training the Model

Fine-tune `boltuix/bert-mini` on the `boltuix/conll2025-ner` dataset to replicate or extend `EntityBERT`. Below is a simplified training script:

```python
# Step 1: Install required libraries quietly
!pip install evaluate transformers datasets tokenizers seqeval pandas pyarrow -q

# Step 2: Disable Weights & Biases (WandB)
import os
os.environ["WANDB_MODE"] = "disabled"

# Step 3: Import necessary libraries
import pandas as pd
import datasets
import numpy as np
from transformers import BertTokenizerFast
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification
from transformers import TrainingArguments, Trainer
import evaluate
from transformers import pipeline
from collections import defaultdict
import json

# Step 4: Load the CoNLL-2025 NER dataset from Parquet
# Download: https://huggingface.co/datasets/boltuix/conll2025-ner/blob/main/conll2025_ner.parquet
parquet_file = "conll2025_ner.parquet"
df = pd.read_parquet(parquet_file)

# Step 5: Convert the pandas DataFrame to a Hugging Face Dataset
conll2025 = datasets.Dataset.from_pandas(df)

# Step 6: Inspect the dataset structure
print("Dataset structure:", conll2025)
print("Dataset features:", conll2025.features)
print("First example:", conll2025[0])

# Step 7: Extract unique tags and create mappings
# Since ner_tags are strings, collect all unique tags
all_tags = set()
for example in conll2025:
    all_tags.update(example["ner_tags"])
unique_tags = sorted(list(all_tags))  # Sort for consistency
num_tags = len(unique_tags)
tag2id = {tag: i for i, tag in enumerate(unique_tags)}
id2tag = {i: tag for i, tag in enumerate(unique_tags)}
print("Number of unique tags:", num_tags)
print("Unique tags:", unique_tags)

# Step 8: Convert string ner_tags to indices
def convert_tags_to_ids(example):
    example["ner_tags"] = [tag2id[tag] for tag in example["ner_tags"]]
    return example

conll2025 = conll2025.map(convert_tags_to_ids)

# Step 9: Split the dataset based on the 'split' column
dataset_dict = {
    "train": conll2025.filter(lambda x: x["split"] == "train"),
    "validation": conll2025.filter(lambda x: x["split"] == "validation"),
    "test": conll2025.filter(lambda x: x["split"] == "test")
}
conll2025 = datasets.DatasetDict(dataset_dict)
print("Split dataset structure:", conll2025)

# Step 10: Initialize the tokenizer
tokenizer = BertTokenizerFast.from_pretrained("boltuix/bert-mini")

# Step 11: Tokenize an example text and inspect the alignment
example_text = conll2025["train"][0]
tokenized_input = tokenizer(example_text["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
word_ids = tokenized_input.word_ids()
print("Word IDs:", word_ids)
print("Tokenized input:", tokenized_input)
print("Length of ner_tags vs input IDs:", len(example_text["ner_tags"]), len(tokenized_input["input_ids"]))

# Step 12: Define a function to tokenize and align labels
def tokenize_and_align_labels(examples, label_all_tokens=True):
    """
    Tokenize inputs and align labels for NER tasks.

    Args:
        examples (dict): Dictionary with tokens and ner_tags.
        label_all_tokens (bool): Whether to label all subword tokens.

    Returns:
        dict: Tokenized inputs with aligned labels.
    """
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, label in enumerate(examples["ner_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # map tokens back to word indices
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                label_ids.append(-100)  # Special tokens get -100
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])  # First token of a word gets its label
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)  # Subwords get the label or -100
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs

# Step 13: Test the tokenization and label alignment
q = tokenize_and_align_labels(conll2025["train"][0:1])
print("Tokenized and aligned example:", q)

# Step 14: Print tokens and their corresponding labels
for token, label in zip(tokenizer.convert_ids_to_tokens(q["input_ids"][0]), q["labels"][0]):
    print(f"{token:_<40} {label}")

# Step 15: Apply tokenization to the entire dataset
tokenized_datasets = conll2025.map(tokenize_and_align_labels, batched=True)

# Step 16: Initialize the model with the correct number of labels
model = AutoModelForTokenClassification.from_pretrained("boltuix/bert-mini", num_labels=num_tags)

# Step 17: Set up training arguments
args = TrainingArguments(
    "boltuix/bert-ner",
    eval_strategy="epoch",  # renamed from evaluation_strategy in newer transformers releases
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    weight_decay=0.01,
    report_to="none"
)

# Step 18: Initialize the data collator for dynamic padding
data_collator = DataCollatorForTokenClassification(tokenizer)

# Step 19: Load the evaluation metric
metric = evaluate.load("seqeval")

# Step 20: Set the label list and test the metric computation
label_list = unique_tags
print("Label list:", label_list)
example = conll2025["train"][0]
labels = [label_list[i] for i in example["ner_tags"]]
print("Metric test:", metric.compute(predictions=[labels], references=[labels]))

# Step 21: Define a function to compute evaluation metrics
def compute_metrics(eval_preds):
    """
    Compute precision, recall, F1, and accuracy for NER.

    Args:
        eval_preds (tuple): Predicted logits and true labels.

    Returns:
        dict: Evaluation metrics.
    """
    pred_logits, labels = eval_preds
    pred_logits = np.argmax(pred_logits, axis=2)
    predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(pred_logits, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(pred_logits, labels)
    ]
    results = metric.compute(predictions=predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }

# Step 22: Initialize the trainer and train
trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)
trainer.train()

# Step 23: Save the fine-tuned model
model.save_pretrained("boltuix/bert-ner")
tokenizer.save_pretrained("tokenizer")

# Step 24: Update the model configuration with label mappings
id2label = {str(i): label for i, label in enumerate(label_list)}
label2id = {label: str(i) for i, label in enumerate(label_list)}
config = json.load(open("boltuix/bert-ner/config.json"))
config["id2label"] = id2label
config["label2id"] = label2id
json.dump(config, open("boltuix/bert-ner/config.json", "w"))

# Step 25: Load the fine-tuned model
model_fine_tuned = AutoModelForTokenClassification.from_pretrained("boltuix/bert-ner")

# Step 26: Create a pipeline for NER inference
nlp = pipeline("token-classification", model=model_fine_tuned, tokenizer=tokenizer)

# Step 27: Perform NER on an example sentence
example = "On July 4th, 2023, President Joe Biden visited the United Nations headquarters in New York to deliver a speech about international law and donated $5 million to relief efforts."
ner_results = nlp(example)
print("NER results for first example:", ner_results)

# Step 28: Perform NER on a property address and format the output
example = "This page contains information about the property located at 1275 Kinnear Rd, Columbus, OH, 43212."
ner_results = nlp(example)

# Step 29: Process NER results into structured entities
entities = defaultdict(list)
current_entity = ""
current_type = ""

for item in ner_results:
    entity = item["entity"]
    word = item["word"]
    if word.startswith("##"):
        current_entity += word[2:]  # Handle subword tokens
    elif entity.startswith("B-"):
        if current_entity and current_type:
            entities[current_type].append(current_entity.strip())
        current_type = entity[2:].lower()
        current_entity = word
    elif entity.startswith("I-") and entity[2:].lower() == current_type:
        current_entity += " " + word  # Continue the same entity
    else:
        if current_entity and current_type:
            entities[current_type].append(current_entity.strip())
        current_entity = ""
        current_type = ""

# Append the final entity if one is still open
if current_entity and current_type:
    entities[current_type].append(current_entity.strip())

# Step 30: Output the final JSON
final_json = dict(entities)
print("Structured NER output:")
print(json.dumps(final_json, indent=2))
```

### Tips
- **Hyperparameters**: Experiment with `learning_rate` (1e-5 to 5e-5) or `num_train_epochs` (2-5).
- **GPU**: Use `fp16=True` for faster training.
- **Custom Data**: Modify the script for custom NER datasets.
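
As an illustration of these tips, here is a variant of the `TrainingArguments` from the script above with values picked from the suggested ranges (illustrative only, not the settings used for the released checkpoint):

```python
from transformers import TrainingArguments

# Variant of the training arguments above, applying the tips:
# a learning rate and epoch count within the suggested ranges, plus mixed precision
args = TrainingArguments(
    "boltuix/bert-ner",
    eval_strategy="epoch",
    learning_rate=3e-5,              # within the suggested 1e-5 to 5e-5 range
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,              # within the suggested 2-5 range
    weight_decay=0.01,
    fp16=True,                       # mixed precision; requires a CUDA GPU
    report_to="none",
)
```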

### Expected Training Time
- ~1.5 hours on an NVIDIA GPU (e.g., T4) for ~115,812 examples, 3 epochs, batch size 16.

### Carbon Impact
- Emissions: ~40g CO₂eq (estimated with the ML Impact tool for 1.5 hours on a GPU).

---

## Installation

```bash
pip install transformers torch pandas pyarrow seqeval
```
- **Python**: 3.8+
- **Storage**: ~15 MB for the model, ~6.38 MB for the dataset
- **Optional**: NVIDIA CUDA for GPU acceleration

### Download Instructions
- **Model**: [boltuix/EntityBERT](https://huggingface.co/boltuix/EntityBERT) (placeholder, update with the correct URL).
- **Dataset**: [boltuix/conll2025-ner](https://huggingface.co/datasets/boltuix/conll2025-ner) (placeholder, update with the correct URL); see the loading sketch below.
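
As a quick sanity check after downloading, the Parquet file can be inspected directly with pandas. This is an illustrative sketch that assumes the filename used in the training script above and the documented `split`, `tokens`, and `ner_tags` columns:

```python
import pandas as pd

# conll2025_ner.parquet is assumed to sit in the working directory after download
df = pd.read_parquet("conll2025_ner.parquet")

print(df["split"].value_counts())   # expected: train / validation / test row counts
print(df.iloc[0]["tokens"])         # first sentence as a list of words
print(df.iloc[0]["ner_tags"])       # matching BIO tags
```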

---

## Evaluation Code
Evaluate on custom data:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from seqeval.metrics import classification_report
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("boltuix/EntityBERT")
model = AutoModelForTokenClassification.from_pretrained("boltuix/EntityBERT")

# Test data
texts = ["Elon Musk launched Tesla in California on March 2025."]
true_labels = [["B-PERSON", "I-PERSON", "O", "B-ORG", "O", "B-GPE", "O", "B-DATE", "I-DATE", "O"]]

pred_labels = []
for text in texts:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    predictions = outputs.logits.argmax(dim=-1)[0].cpu().numpy()
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    word_ids = inputs.word_ids(batch_index=0)
    word_preds = []
    previous_word_idx = None
    for idx, word_idx in enumerate(word_ids):
        if word_idx is None or word_idx == previous_word_idx:
            continue
        label = model.config.id2label[predictions[idx]]
        word_preds.append(label)
        previous_word_idx = word_idx
    pred_labels.append(word_preds)

# Evaluate
print("Predicted:", pred_labels)
print("True     :", true_labels)
print("\nEvaluation Report:\n")
print(classification_report(true_labels, pred_labels))
```

---

## Dataset Details
- **Entries**: 143,709
- **Size**: 6.38 MB (Parquet)
- **Columns**: `split`, `tokens`, `ner_tags`
- **Splits**: Train (~115,812), Validation (~15,680), Test (~12,217)
- **NER Tags**: 36 (18 entity types with B-/I- tags + O)
- **Source**: News, user-generated content, research corpora

---

## Visualizing NER Tags
Compute and plot the tag distribution with:

```python
import pandas as pd
from collections import Counter
import matplotlib.pyplot as plt

# Load dataset
df = pd.read_parquet("conll2025_ner.parquet")

# Count tags
all_tags = [tag for tags in df["ner_tags"] for tag in tags]
tag_counts = Counter(all_tags)

# Plot tag frequencies
plt.figure(figsize=(12, 6))
plt.bar(tag_counts.keys(), tag_counts.values())
plt.title("NER Tag Distribution in conll2025-ner")
plt.xlabel("Tag")
plt.ylabel("Count")
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
```

---

## Comparison to Other Models
| Model | Dataset | Parameters | F1 Score | Size |
|----------------------|--------------------|------------|----------|---------|
| **EntityBERT** | conll2025-ner | ~4.4M | 0.85 | ~15 MB |
| NeuroBERT-NER | conll2025-ner | ~11M | 0.86 | ~50 MB |
| BERT-base-NER | CoNLL-2003 | ~110M | ~0.89 | ~400 MB |
| DistilBERT-NER | CoNLL-2003 | ~66M | ~0.85 | ~200 MB |

**Advantages**:
- Ultra-lightweight (~4.4M parameters, ~15 MB)
- Competitive F1 score (0.85)
- Ideal for resource-constrained environments
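
To verify the footprint claims locally, a small sketch (not from the original card; it assumes the `boltuix/EntityBERT` repository id resolves) that counts parameters after loading the checkpoint:

```python
from transformers import AutoModelForTokenClassification

# Load the checkpoint and count its parameters
model = AutoModelForTokenClassification.from_pretrained("boltuix/EntityBERT")
num_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {num_params / 1e6:.1f}M")  # expected to be roughly ~4.4M per the table above
```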

---

## Community and Support
- Model page: [boltuix/EntityBERT](https://huggingface.co/boltuix/EntityBERT) (placeholder)
- Issues/Contributions: model repository (URL TBD)
- Hugging Face forums: [https://huggingface.co/discussions](https://huggingface.co/discussions)
- Docs: [Hugging Face Transformers](https://huggingface.co/docs/transformers)
- Contact: [[email protected]](mailto:[email protected])

---

## Last Updated
**June 11, 2025**: Released v1.0 with fine-tuning on `boltuix/conll2025-ner`.

**[Get Started Now](#getting-started)**