Update README.md
---
library_name: model2vec
license: mit
model_name: tmpqsu1ee6a
tags:
- embeddings
- static-embeddings
datasets:
- HuggingFaceFW/fineweb-edu-llama3-annotations
language:
- en
base_model:
- minishlab/potion-base-8M
---

# potion-8m-edu-classifier Model Card

This [Model2Vec](https://github.com/MinishLab/model2vec) model is a fine-tuned version of [potion-base-8m](https://huggingface.co/minishlab/potion-base-8M). It was trained to predict educational content, analogous to how the [fineweb-edu-classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) was used to filter educational content.

It achieves the following performance on the evaluation split:
```
              precision    recall  f1-score   support

           0       0.70      0.42      0.52      5694
           1       0.75      0.86      0.80     26512
           2       0.55      0.51      0.53     10322
           3       0.54      0.45      0.49      3407
           4       0.59      0.30      0.40       807
           5       0.00      0.00      0.00         1

    accuracy                           0.69     46743
   macro avg       0.52      0.42      0.46     46743
weighted avg       0.68      0.69      0.68     46743
```

When thresholded to a binary classifier, it achieves a macro-averaged F1-score of `0.79`. The original classifier achieves `0.81` on the same dataset, but this classifier is orders of magnitude faster on CPU. The corresponding binary classification report:

```
              precision    recall  f1-score   support

     not edu       0.96      0.98      0.97     42528
         edu       0.70      0.54      0.61      4215

    accuracy                           0.94     46743
   macro avg       0.83      0.76      0.79     46743
weighted avg       0.93      0.94      0.93     46743
```
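
The binary numbers above come from collapsing the six-level scores at a cutoff. A minimal sketch of that thresholding, assuming scores of 3 and above count as educational (an assumption, though it is consistent with the support counts above: 3407 + 807 + 1 = 4215 `edu` examples):

```python
def binarize(labels, threshold=3):
    """Collapse 0-5 educational scores into binary labels.

    Assumes scores >= threshold count as educational (1) and the rest
    as not educational (0); threshold=3 matches the support counts in
    the reports above.
    """
    return [int(label >= threshold) for label in labels]

print(binarize([0, 1, 2, 3, 4, 5]))  # [0, 0, 0, 1, 1, 1]
```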

## Installation

Install model2vec with the inference extra using pip:
```
pip install model2vec[inference]
```

## Usage

Load this model using the `from_pretrained` method:
```python
from model2vec.inference import StaticModelPipeline

# Load a pretrained Model2Vec model
model = StaticModelPipeline.from_pretrained("minishlab/potion-8m-edu-classifier")

# Predict labels
label = model.predict(["Example sentence"])
```
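
As a sketch of the filtering use case the card mentions, the predicted labels can gate a corpus. `filter_educational` and the cutoff of 3 are illustrative assumptions, not part of the model2vec API; only `model.predict` above is the real call, and its labels are assumed here to be the integer scores 0-5 from the evaluation report.

```python
def filter_educational(docs, predictions, threshold=3):
    # Hypothetical helper: keep documents whose predicted score meets the cutoff.
    # `predictions` stands in for the output of model.predict(docs).
    return [doc for doc, score in zip(docs, predictions) if int(score) >= threshold]

docs = ["A proof of the Pythagorean theorem.", "Buy cheap watches now!"]
predictions = [4, 0]  # stand-in for model.predict(docs)
print(filter_educational(docs, predictions))  # ['A proof of the Pythagorean theorem.']
```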

## Library Authors

Model2Vec was developed by [Minish](https://github.com/MinishLab).

## Citation

Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
  author = {Stephan Tulkens and Thomas van Dongen},
  title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
  year = {2024},
  url = {https://github.com/MinishLab/model2vec}
}
```