NilsML committed · verified
Commit 65c57eb · 1 Parent(s): b820329

Upload folder using huggingface_hub
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 384,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
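
The configuration above enables mean pooling only (`pooling_mode_mean_tokens: true`): token embeddings are averaged under the attention mask before the model's final normalization step. A minimal sketch of that operation, assuming PyTorch tensors `token_embeddings` of shape (batch, seq, 384) and `attention_mask` of shape (batch, seq); the function name is illustrative, not part of this repository:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Expand the mask to the embedding dimension so padded positions contribute zero.
    mask = attention_mask.unsqueeze(-1).type_as(token_embeddings)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                  # (batch, 384)
    counts = mask.sum(dim=1).clamp(min=1e-9)                       # avoid division by zero
    return summed / counts                                         # (batch, 384)
```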
README.md ADDED
@@ -0,0 +1,666 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:46716
+ - loss:MultipleNegativesRankingLoss
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ widget:
+ - source_sentence: Electromagnetic radiation behaves like particles as well as what?
+ sentences:
+ - quantum metrology allows us to attain a measurement precision that surpasses the
+ classically achievable limit by using quantum characters. the metrology precision
+ is raised from the standard quantum limit ( sql ) to the heisenberg limit ( hl
+ ) by using entanglement. however, it was reported that the hl returns to the sql
+ in the presence of local dephasing environments under the long encoding - time
+ condition. we evaluate here the exact impacts of local dissipative environments
+ on quantum metrology, based on the ramsey interferometer. it is found that the
+ hl is asymptotically recovered under the long encoding - time condition for a
+ finite number of the probe atoms. our analysis reveals that this is essentially
+ due to the formation of a bound state between each atom and its environment. this
+ provides an avenue for experimentation to implement quantum metrology under practical
+ conditions via engineering of the formation of the system - environment bound
+ state.
+ - plasmons in two - dimensional electron systems with nonparabolic bands, such as
+ graphene, feature strong dependence on electron - electron interactions. we use
+ a many - body approach to relate plasmon dispersion at long wavelengths to landau
+ fermi - liquid interactions and quasiparticle velocity. an identical renormalization
+ is shown to arise for the magnetoplasmon resonance. for a model with n > > 1 fermion
+ species, this approach predicts a power - law dependence for plasmon frequency
+ vs. carrier concentration, valid in a wide range of doping densities, both high
+ and low. gate tunability of plasmons in graphene can be exploited to directly
+ probe the effects of electron - electron interaction.
+ - 'the study of earth - mass extrasolar planets via the radial - velocity technique
+ and the measurement of the potential cosmological variability of fundamental constants
+ call for very - high - precision spectroscopy at the level of $ \ updelta \ lambda
+ / \ lambda < 10 ^ { - 9 } $. wavelength accuracy is obtained by providing two
+ fundamental ingredients : 1 ) an absolute and information - rich wavelength source
+ and 2 ) the ability of the spectrograph and its data reduction of transferring
+ the reference scale ( wavelengths ) to a measurement scale ( detector pixels )
+ in a repeatable manner. the goal of this work is to improve the wavelength calibration
+ accuracy of the harps spectrograph by combining the absolute spectral reference
+ provided by the emission lines of a thorium - argon hollow - cathode lamp ( hcl
+ ) with the spectrally rich and precise spectral information of a fabry - p \ ''
+ erot - based calibration source. on the basis of calibration frames acquired each
+ night since the fabry - p \ '' erot etalon was installed on harps in 2011, we
+ construct a combined wavelength solution which fits simultaneously the thorium
+ emission lines and the fabry - p \ '' erot lines. the combined fit is anchored
+ to the absolute thorium wavelengths, which provide the ` zero - point '' of the
+ spectrograph, while the fabry - p \ '' erot lines are used to improve the ( spectrally
+ ) local precision. the obtained wavelength solution is verified for auto - consistency
+ and tested against a solution obtained using the harps laser - frequency comb
+ ( lfc ). the combined thorium + fabry - p \ '' erot wavelength solution shows
+ significantly better performances compared to the thorium - only calibration.
+ the presented techniques will therefore be used in the new harps and harps - n
+ pipeline, and will be exported to the espresso spectrograph.'
+ - source_sentence: There are several types of wetlands including marshes, swamps,
+ bogs, mudflats, and salt marshes. the three shared characteristics among these
+ types—what makes them wetlands—are their hydrology, hydrophytic vegetation, and
+ this?
+ sentences:
+ - we report updated measurements of branching fractions ( $ \ mathcal { b } $ )
+ and cp - violating charge asymmetries ( $ \ mathcal { a _ { \ rm cp } } $ ) for
+ charmless $ b $ decays at belle ii, which operates on or near the $ \ upsilon
+ $ ( 4s ) resonance at the superkekb asymmetric energy $ e ^ { + } e ^ { - } $
+ collider. we use samples of 2019 and 2020 data corresponding to 62. 8 fb $ ^ {
+ - 1 } $ of integrated luminosity. the samples are analysed using two - dimensional
+ fits in $ \ delta e $ and $ m _ { \ it bc } $ to determine signal yields of approximately
+ 568, 103, and 115 decays for the channels $ b ^ 0 \ to k ^ + \ pi ^ - $, $ b ^
+ + \ to k _ { \ rm s } ^ 0 \ pi ^ + $, and $ b ^ 0 \ to \ pi ^ + \ pi ^ - $, respectively.
+ signal yields are corrected for efficiencies determined from simulation and control
+ data samples to obtain branching fractions and cp - violating asymmetries for
+ flavour - specific channels. the results are compatible with known determinations
+ and contribute important information to an early assessment of belle ii detector
+ performance.
+ - ') – characterised by its brown colour. health and environmental concerns associated
+ with electronics assembly have gained increased attention in recent years, especially
+ for products destined to go to european markets. electrical components are generally
+ mounted in the following ways : through - hole ( sometimes referred to as '' pin
+ - through - hole '' ) surface mount chassis mount rack mount lga / bga / pga socket
+ = = industry = = the electronics industry consists of various sectors. the central
+ driving force behind the entire electronics industry is the semiconductor industry
+ sector, which has annual sales of over $ 481 billion as of 2018. the largest industry
+ sector is e - commerce, which generated over $ 29 trillion in 2017. the most widely
+ manufactured electronic device is the metal - oxide - semiconductor field - effect
+ transistor ( mosfet ), with an estimated 13 sextillion mosfets having been manufactured
+ between 1960 and 2018. in the 1960s, u. s. manufacturers were unable to compete
+ with japanese companies such as sony and hitachi who could produce high - quality
+ goods at lower prices. by the 1980s, however, u. s. manufacturers became the world
+ leaders in semiconductor development and assembly. however, during the 1990s and
+ subsequently, the industry shifted overwhelmingly to east asia ( a process begun
+ with the initial movement of microchip mass - production there in the 1970s ),
+ as plentiful, cheap labor, and increasing technological sophistication, became
+ widely available there. over three decades, the united states '' global share
+ of semiconductor manufacturing capacity fell, from 37 % in 1990, to 12 % in 2022.
+ america '' s pre - eminent semiconductor manufacturer, intel corporation, fell
+ far behind its subcontractor taiwan semiconductor manufacturing company ( tsmc
+ ) in manufacturing technology. by that time, taiwan had become the world '' s
+ leading source of advanced semiconductors — followed by south korea, the united
+ states, japan, singapore, and china. important semiconductor industry facilities
+ ( which often are subsidiaries of a leading producer based elsewhere ) also exist
+ in europe ( notably the netherlands ), southeast asia, south america, and israel.
+ = = see also = = = = references = = = = further reading = = horowitz, paul ; hill,
+ winfield ( 1980 ). the art of electronics. cambridge university press. isbn 978
+ - 0521370950. mims, forrest m. ( 2003 ). getting started in electronics. master
+ publishing, incorporated. isbn 978 - 0 - 945053 - 28 - 6. = = external links =
+ = navy 1998 navy electricity and electronics'
+ - 'we construct two - band topological semimetals in four dimensions using the unstable
+ homotopy of maps from the three - torus $ t ^ 3 $ ( brillouin zone of a 3d crystal
+ ) to the two - sphere $ s ^ 2 $. dubbed ` ` hopf semimetals '' '', these gapless
+ phases generically host nodal lines, with a surface enclosing such a nodal line
+ in the four - dimensional brillouin zone carrying a hopf flux. these semimetals
+ show a unique class of surface states : while some three - dimensional surfaces
+ host gapless fermi - arc states { \ em and } drumhead states, other surfaces have
+ gapless fermi surfaces. gapless two - dimensional corner states are also present
+ at the intersection of three - dimensional surfaces.'
+ - source_sentence: What play several important roles in the human body?
+ sentences:
+ - the problem of ranking is a multi - billion dollar problem. in this paper we present
+ an overview of several production quality ranking systems. we show that due to
+ conflicting goals of employing the most effective machine learning models and
+ responding to users in real time, ranking systems have evolved into a system of
+ systems, where each subsystem can be viewed as a component layer. we view these
+ layers as being data processing, representation learning, candidate selection
+ and online inference. each layer employs different algorithms and tools, with
+ every end - to - end ranking system spanning multiple architectures. our goal
+ is to familiarize the general audience with a working knowledge of ranking at
+ scale, the tools and algorithms employed and the challenges introduced by adopting
+ a layered approach.
+ - this tutorial review provides a guiding reference to researchers who want to have
+ an overview of the large body of literature about graph spanners. it reviews the
+ current literature covering various research streams about graph spanners, such
+ as different formulations, sparsity and lightness results, computational complexity,
+ dynamic algorithms, and applications. as an additional contribution, we offer
+ a list of open problems on graph spanners.
+ - we present a perturbative correction within initiator full configuration interaction
+ quantum monte carlo ( i - fciqmc ). in the existing i - fciqmc algorithm, a significant
+ number of spawned walkers are discarded due to the initiator criteria. here we
+ show that these discarded walkers have a form that allows calculation of a second
+ - order epstein - nesbet correction, that may be accumulated in a trivial and
+ inexpensive manner, yet substantially improves i - fciqmc results. the correction
+ is applied to the hubbard model, the uniform electron gas and molecular systems.
+ - source_sentence: The cells in the follicle undergo physical changes and produce
+ a structure called a what?
+ sentences:
+ - Following ovulation, the ovarian cycle enters its luteal phase, illustrated in
+ Figure 43.15 and the menstrual cycle enters its secretory phase, both of which
+ run from about day 15 to 28. The luteal and secretory phases refer to changes
+ in the ruptured follicle. The cells in the follicle undergo physical changes and
+ produce a structure called a corpus luteum. The corpus luteum produces estrogen
+ and progesterone. The progesterone facilitates the regrowth of the uterine lining
+ and inhibits the release of further FSH and LH. The uterus is being prepared to
+ accept a fertilized egg, should it occur during this cycle. The inhibition of
+ FSH and LH prevents any further eggs and follicles from developing, while the
+ progesterone is elevated. The level of estrogen produced by the corpus luteum
+ increases to a steady level for the next few days. If no fertilized egg is implanted
+ into the uterus, the corpus luteum degenerates and the levels of estrogen and
+ progesterone decrease. The endometrium begins to degenerate as the progesterone
+ levels drop, initiating the next menstrual cycle. The decrease in progesterone
+ also allows the hypothalamus to send GnRH to the anterior pituitary, releasing
+ FSH and LH and starting the cycles again. Figure 43.17 visually compares the ovarian
+ and uterine cycles as well as the commensurate hormone levels.
+ - An ammeter measures the current traveling through the circuit. They are designed
+ to be connected to the circuit in series, and have an extremely low resistance.
+ If an ammeter were connected in parallel, all of the current would go through
+ the ammeter and very little through any other resistor. As such, it is necessary
+ for the ammeter to be connected in series with the resistors. This allows the
+ ammeter to accurately measure the current flow without causing any disruptions.
+ In the circuit sketched above, the ammeter is .
+ - ', narasimha. later he had visions of scrolls of complex mathematical content
+ unfolding before his eyes. he often said, " an equation for me has no meaning
+ unless it expresses a thought of god. " hardy cites ramanujan as remarking that
+ all religions seemed equally true to him. hardy further argued that ramanujan
+ '' s religious belief had been romanticised by westerners and overstated — in
+ reference to his belief, not practice — by indian biographers. at the same time,
+ he remarked on ramanujan '' s strict vegetarianism. similarly, in an interview
+ with frontline, berndt said, " many people falsely promulgate mystical powers
+ to ramanujan '' s mathematical thinking. it is not true. he has meticulously recorded
+ every result in his three notebooks, " further speculating that ramanujan worked
+ out intermediate results on slate that he could not afford the paper to record
+ more permanently. berndt reported that janaki said in 1984 that ramanujan spent
+ so much of his time on mathematics that he did not go to the temple, that she
+ and her mother often fed him because he had no time to eat, and that most of the
+ religious stories attributed to him originated with others. however, his orthopraxy
+ was not in doubt. = = mathematical achievements = = in mathematics, there is a
+ distinction between insight and formulating or working through a proof. ramanujan
+ proposed an abundance of formulae that could be investigated later in depth. g.
+ h. hardy said that ramanujan '' s discoveries are unusually rich and that there
+ is often more to them than initially meets the eye. as a byproduct of his work,
+ new directions of research were opened up. examples of the most intriguing of
+ these formulae include infinite series for π, one of which is given below : 1
+ π = 2 2 9801 [UNK] k = 0 ∞ ( 4 k )! ( 1103 + 26390 k ) ( k! ) 4 396 4 k. { \ displaystyle
+ { \ frac { 1 } { \ pi } } = { \ frac { 2 { \ sqrt { 2 } } } { 9801 } } \ sum _
+ { k = 0 } ^ { \ infty } { \ frac { ( 4k )! ( 1103 + 26390k ) } { ( k! ) ^ { 4
+ } 396 ^ { 4k } } }. } this result is based on the negative fundamental discriminant
+ d'
+ - source_sentence: What type of electrons are electrons that are not confined to the
+ bond between two atoms?
+ sentences:
+ - Gap genes themselves are under the effect of maternal effect genes, such as bicoid
+ and nanos. Gap genes also regulate each other to achieve their precise striped
+ expression patterns. The maternal effect is when the phenotype of offspring is
+ partly determined by the phenotype of its mother, irrespective of genotype. This
+ often occurs when the mother supplies mRNA or proteins to the egg, affecting early
+ development. In developing Drosophila, maternal effects include axis determination.
+ - the human capacity for working together and with tools builds on cognitive abilities
+ that, while not unique to humans, are most developed in humans both in scale and
+ plasticity. our capacity to engage with collaborators and with technology requires
+ a continuous expenditure of attentive work that we show may be understood in terms
+ of what is heuristically argued as ` trust ' in socio - economic fields. by adopting
+ a ` social physics ' of information approach, we are able to bring dimensional
+ analysis to bear on an anthropological - economic issue. the cognitive - economic
+ trade - off between group size and rate of attention to detail is the connection
+ between these. this allows humans to scale cooperative effort across groups, from
+ teams to communities, with a trade - off between group size and attention. we
+ show here that an accurate concept of trust follows a bipartite ` economy of work
+ ' model, and that this leads to correct predictions about the statistical distribution
+ of group sizes in society. trust is essentially a cognitive - economic issue that
+ depends on the memory cost of past behaviour and on the frequency of attentive
+ policing of intent. all this leads to the characteristic ` fractal ' structure
+ for human communities. the balance between attraction to some alpha attractor
+ and dispersion due to conflict fully explains data from all relevant sources.
+ the implications of our method suggest a broad applicability beyond purely social
+ groupings to general resource constrained interactions, e. g. in work, technology,
+ cybernetics, and generalized socio - economic systems of all kinds.
+ - we consider a long - term optimal investment problem where an investor tries to
+ minimize the probability of falling below a target growth rate. from a mathematical
+ viewpoint, this is a large deviation control problem. this problem will be shown
+ to relate to a risk - sensitive stochastic control problem for a sufficiently
+ large time horizon. indeed, in our theorem we state a duality in the relation
+ between the above two problems. furthermore, under a multidimensional linear gaussian
+ model we obtain explicit solutions for the primal problem.
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: sciq eval
+ type: sciq-eval
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.647
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.751
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.786
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 0.827
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.647
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.2503333333333333
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.15719999999999998
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.08269999999999998
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.647
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.751
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.786
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 0.827
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.735176233512708
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.7059130952380956
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.7086971683832702
+ name: Cosine Map@100
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
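
The same three-module stack can also be assembled programmatically; a minimal sketch using `sentence_transformers.models` (note this rebuilds the base architecture with the pretrained MiniLM weights rather than loading this repository's fine-tuned checkpoint, which the Usage section below covers):

```python
from sentence_transformers import SentenceTransformer, models

# Mirror the architecture above: Transformer -> mean Pooling -> Normalize.
word_embedding_model = models.Transformer("sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean")
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
```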
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+ 'What type of electrons are electrons that are not confined to the bond between two atoms?',
+ "the human capacity for working together and with tools builds on cognitive abilities that, while not unique to humans, are most developed in humans both in scale and plasticity. our capacity to engage with collaborators and with technology requires a continuous expenditure of attentive work that we show may be understood in terms of what is heuristically argued as ` trust ' in socio - economic fields. by adopting a ` social physics ' of information approach, we are able to bring dimensional analysis to bear on an anthropological - economic issue. the cognitive - economic trade - off between group size and rate of attention to detail is the connection between these. this allows humans to scale cooperative effort across groups, from teams to communities, with a trade - off between group size and attention. we show here that an accurate concept of trust follows a bipartite ` economy of work ' model, and that this leads to correct predictions about the statistical distribution of group sizes in society. trust is essentially a cognitive - economic issue that depends on the memory cost of past behaviour and on the frequency of attentive policing of intent. all this leads to the characteristic ` fractal ' structure for human communities. the balance between attraction to some alpha attractor and dispersion due to conflict fully explains data from all relevant sources. the implications of our method suggest a broad applicability beyond purely social groupings to general resource constrained interactions, e. g. in work, technology, cybernetics, and generalized socio - economic systems of all kinds.",
+ 'we consider a long - term optimal investment problem where an investor tries to minimize the probability of falling below a target growth rate. from a mathematical viewpoint, this is a large deviation control problem. this problem will be shown to relate to a risk - sensitive stochastic control problem for a sufficiently large time horizon. indeed, in our theorem we state a duality in the relation between the above two problems. furthermore, under a multidimensional linear gaussian model we obtain explicit solutions for the primal problem.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
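
Beyond the all-pairs similarity above, the same calls support a small semantic-search loop; a minimal sketch (the corpus sentences are taken from the widget examples in this card, the query is illustrative, and `sentence_transformers_model_id` is again a placeholder for this repository's id):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id, as above

corpus = [
    "The corpus luteum produces estrogen and progesterone.",
    "An ammeter measures the current traveling through the circuit.",
]
query = "Which structure secretes progesterone after ovulation?"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity returns a (1, len(corpus)) tensor of cosine similarities,
# since similarity_fn_name is "cosine" for this model.
scores = model.similarity(query_embedding, corpus_embeddings)
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))
```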
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Dataset: `sciq-eval`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.647      |
+ | cosine_accuracy@3   | 0.751      |
+ | cosine_accuracy@5   | 0.786      |
+ | cosine_accuracy@10  | 0.827      |
+ | cosine_precision@1  | 0.647      |
+ | cosine_precision@3  | 0.2503     |
+ | cosine_precision@5  | 0.1572     |
+ | cosine_precision@10 | 0.0827     |
+ | cosine_recall@1     | 0.647      |
+ | cosine_recall@3     | 0.751      |
+ | cosine_recall@5     | 0.786      |
+ | cosine_recall@10    | 0.827      |
+ | **cosine_ndcg@10**  | **0.7352** |
+ | cosine_mrr@10       | 0.7059     |
+ | cosine_map@100      | 0.7087     |
+
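
These figures are produced by `InformationRetrievalEvaluator`; a minimal sketch of re-running such an evaluation (the query/corpus dictionaries here are illustrative placeholders, not the actual sciq-eval split, and `sentence_transformers_model_id` is the same placeholder used in the Usage section):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id

# Illustrative inputs: id -> text for queries and corpus, plus a relevance mapping.
queries = {"q1": "Electromagnetic radiation behaves like particles as well as what?"}
corpus = {"d1": "Electromagnetic radiation behaves like waves as well as particles."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="sciq-eval")
results = evaluator(model)
print(results)  # dict of metrics such as cosine_ndcg@10
```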
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+ * Size: 46,716 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0                                                                        | sentence_1                                                                          | label                                                          |
+   |:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
+   | type    | string                                                                            | string                                                                              | float                                                          |
+   | details | <ul><li>min: 5 tokens</li><li>mean: 18.07 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 175.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.24</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 | label |
+   |:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
+   | <code>What occurs when a former inhabited area gets disturbed?</code> | <code>recent approaches to improving the extraction of text embeddings from autoregressive large language models ( llms ) have largely focused on improvements to data, backbone pretrained language models, or improving task - differentiation via instructions. in this work, we address an architectural limitation of autoregressive models : token embeddings cannot contain information from tokens that appear later in the input. to address this limitation, we propose a simple approach, " echo embeddings, " in which we repeat the input twice in context and extract embeddings from the second occurrence. we show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high - quality llms for embeddings. on the mteb leaderboard, echo embeddings improve over classical embeddings by over 9 % zero - shot and by around 0. 7 % when fine - tuned. echo embeddings with a mistral - 7b model achieve state - of - the - art compared to prior open source mod...</code> | <code>0.0</code> |
+   | <code>Veins subdivide repeatedly and branch throughout what?</code> | <code>the notion of generalization has moved away from the classical one defined in statistical learning theory towards an emphasis on out - of - domain generalization ( oodg ). recently, there is a growing focus on inductive generalization, where a progression of difficulty implicitly governs the direction of domain shifts. in inductive generalization, it is often assumed that the training data lie in the easier side, while the testing data lie in the harder side. the challenge is that training data are always finite, but a learner is expected to infer an inductive principle that could be applied in an unbounded manner. this emerging regime has appeared in the literature under different names, such as length / logical / algorithmic extrapolation, but a formal definition is lacking. this work provides such a formalization that centers on the concept of model successors. then we outline directions to adapt well - established techniques towards the learning of model successors. this work calls...</code> | <code>0.0</code> |
+   | <code>What is the term for physicians and scientists who research and develop vaccines and treat and study conditions ranging from allergies to aids?</code> | <code>we generalize the hierarchy construction to generic 2 + 1d topological orders ( which can be non - abelian ) by condensing abelian anyons in one topological order to construct a new one. we show that such construction is reversible and leads to a new equivalence relation between topological orders. we refer to the corresponding equivalent class ( the orbit of the hierarchy construction ) as " the non - abelian family ". each non - abelian family has one or a few root topological orders with the smallest number of anyon types. all the abelian topological orders belong to the trivial non - abelian family whose root is the trivial topological order. we show that abelian anyons in root topological orders must be bosons or fermions with trivial mutual statistics between them. the classification of topological orders is then greatly simplified, by focusing on the roots of each family : those roots are given by non - abelian modular extensions of representation categories of abelian groups.</code> | <code>0.0</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+   ```json
+   {
+       "scale": 20.0,
+       "similarity_fct": "cos_sim"
+   }
+   ```
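
A sketch of how a comparable fine-tuning run could be set up with this loss in Sentence Transformers 3.x (the two training pairs are illustrative placeholders; the real run used the unnamed 46,716-pair dataset and the hyperparameters listed below):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Illustrative (question, passage) pairs; the real run used 46,716 such rows.
train_dataset = Dataset.from_dict({
    "sentence_0": [
        "What play several important roles in the human body?",
        "The cells in the follicle undergo physical changes and produce a structure called a what?",
    ],
    "sentence_1": [
        "Proteins play several important roles in the human body.",
        "The cells in the follicle undergo physical changes and produce a structure called a corpus luteum.",
    ],
})

# In-batch negatives with cosine similarity, matching the parameters above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=1,
    per_device_train_batch_size=32,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```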
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `num_train_epochs`: 1
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 32
+ - `per_device_eval_batch_size`: 32
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 1
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `tp_size`: 0
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss | sciq-eval_cosine_ndcg@10 |
+ |:------:|:----:|:-------------:|:------------------------:|
+ | 0.0685 | 100  | -             | 0.6007                   |
+ | 0.1370 | 200  | -             | 0.7026                   |
+ | 0.2055 | 300  | -             | 0.7167                   |
+ | 0.2740 | 400  | -             | 0.7195                   |
+ | 0.3425 | 500  | 2.8082        | 0.7150                   |
+ | 0.4110 | 600  | -             | 0.7292                   |
+ | 0.4795 | 700  | -             | 0.7356                   |
+ | 0.5479 | 800  | -             | 0.7428                   |
+ | 0.6164 | 900  | -             | 0.7399                   |
+ | 0.6849 | 1000 | 2.6228        | 0.7339                   |
+ | 0.7534 | 1100 | -             | 0.7356                   |
+ | 0.8219 | 1200 | -             | 0.7375                   |
+ | 0.8904 | 1300 | -             | 0.7385                   |
+ | 0.9589 | 1400 | -             | 0.7351                   |
+ | 1.0    | 1460 | -             | 0.7352                   |
+
+
+ ### Framework Versions
+ - Python: 3.12.8
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.51.3
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 384,
+ "initializer_range": 0.02,
+ "intermediate_size": 1536,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 6,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.51.3",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.4.1",
+ "transformers": "4.51.3",
+ "pytorch": "2.5.1+cu124"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:db3d302080566027b2069c6c8f8969de86b2c56aea845ad99edea18fb6e6d5f4
+ size 90864192
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 256,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "extra_special_tokens": {},
+ "mask_token": "[MASK]",
+ "max_length": 128,
+ "model_max_length": 512,
+ "never_split": null,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff