applied-ai-018 committed 804c12f (verified) · Parent: 6144c9a

Add files using upload-large-folder tool
LICENSE ADDED
@@ -0,0 +1,201 @@
1
+ Apache License
2
+ Version 2.0, January 2004
3
+ http://www.apache.org/licenses/
4
+
5
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6
+
7
+ 1. Definitions.
8
+
9
+ "License" shall mean the terms and conditions for use, reproduction,
10
+ and distribution as defined by Sections 1 through 9 of this document.
11
+
12
+ "Licensor" shall mean the copyright owner or entity authorized by
13
+ the copyright owner that is granting the License.
14
+
15
+ "Legal Entity" shall mean the union of the acting entity and all
16
+ other entities that control, are controlled by, or are under common
17
+ control with that entity. For the purposes of this definition,
18
+ "control" means (i) the power, direct or indirect, to cause the
19
+ direction or management of such entity, whether by contract or
20
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
21
+ outstanding shares, or (iii) beneficial ownership of such entity.
22
+
23
+ "You" (or "Your") shall mean an individual or Legal Entity
24
+ exercising permissions granted by this License.
25
+
26
+ "Source" form shall mean the preferred form for making modifications,
27
+ including but not limited to software source code, documentation
28
+ source, and configuration files.
29
+
30
+ "Object" form shall mean any form resulting from mechanical
31
+ transformation or translation of a Source form, including but
32
+ not limited to compiled object code, generated documentation,
33
+ and conversions to other media types.
34
+
35
+ "Work" shall mean the work of authorship, whether in Source or
36
+ Object form, made available under the License, as indicated by a
37
+ copyright notice that is included in or attached to the work
38
+ (an example is provided in the Appendix below).
39
+
40
+ "Derivative Works" shall mean any work, whether in Source or Object
41
+ form, that is based on (or derived from) the Work and for which the
42
+ editorial revisions, annotations, elaborations, or other modifications
43
+ represent, as a whole, an original work of authorship. For the purposes
44
+ of this License, Derivative Works shall not include works that remain
45
+ separable from, or merely link (or bind by name) to the interfaces of,
46
+ the Work and Derivative Works thereof.
47
+
48
+ "Contribution" shall mean any work of authorship, including
49
+ the original version of the Work and any modifications or additions
50
+ to that Work or Derivative Works thereof, that is intentionally
51
+ submitted to Licensor for inclusion in the Work by the copyright owner
52
+ or by an individual or Legal Entity authorized to submit on behalf of
53
+ the copyright owner. For the purposes of this definition, "submitted"
54
+ means any form of electronic, verbal, or written communication sent
55
+ to the Licensor or its representatives, including but not limited to
56
+ communication on electronic mailing lists, source code control systems,
57
+ and issue tracking systems that are managed by, or on behalf of, the
58
+ Licensor for the purpose of discussing and improving the Work, but
59
+ excluding communication that is conspicuously marked or otherwise
60
+ designated in writing by the copyright owner as "Not a Contribution."
61
+
62
+ "Contributor" shall mean Licensor and any individual or Legal Entity
63
+ on behalf of whom a Contribution has been received by Licensor and
64
+ subsequently incorporated within the Work.
65
+
66
+ 2. Grant of Copyright License. Subject to the terms and conditions of
67
+ this License, each Contributor hereby grants to You a perpetual,
68
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69
+ copyright license to reproduce, prepare Derivative Works of,
70
+ publicly display, publicly perform, sublicense, and distribute the
71
+ Work and such Derivative Works in Source or Object form.
72
+
73
+ 3. Grant of Patent License. Subject to the terms and conditions of
74
+ this License, each Contributor hereby grants to You a perpetual,
75
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76
+ (except as stated in this section) patent license to make, have made,
77
+ use, offer to sell, sell, import, and otherwise transfer the Work,
78
+ where such license applies only to those patent claims licensable
79
+ by such Contributor that are necessarily infringed by their
80
+ Contribution(s) alone or by combination of their Contribution(s)
81
+ with the Work to which such Contribution(s) was submitted. If You
82
+ institute patent litigation against any entity (including a
83
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
84
+ or a Contribution incorporated within the Work constitutes direct
85
+ or contributory patent infringement, then any patent licenses
86
+ granted to You under this License for that Work shall terminate
87
+ as of the date such litigation is filed.
88
+
89
+ 4. Redistribution. You may reproduce and distribute copies of the
90
+ Work or Derivative Works thereof in any medium, with or without
91
+ modifications, and in Source or Object form, provided that You
92
+ meet the following conditions:
93
+
94
+ (a) You must give any other recipients of the Work or
95
+ Derivative Works a copy of this License; and
96
+
97
+ (b) You must cause any modified files to carry prominent notices
98
+ stating that You changed the files; and
99
+
100
+ (c) You must retain, in the Source form of any Derivative Works
101
+ that You distribute, all copyright, patent, trademark, and
102
+ attribution notices from the Source form of the Work,
103
+ excluding those notices that do not pertain to any part of
104
+ the Derivative Works; and
105
+
106
+ (d) If the Work includes a "NOTICE" text file as part of its
107
+ distribution, then any Derivative Works that You distribute must
108
+ include a readable copy of the attribution notices contained
109
+ within such NOTICE file, excluding those notices that do not
110
+ pertain to any part of the Derivative Works, in at least one
111
+ of the following places: within a NOTICE text file distributed
112
+ as part of the Derivative Works; within the Source form or
113
+ documentation, if provided along with the Derivative Works; or,
114
+ within a display generated by the Derivative Works, if and
115
+ wherever such third-party notices normally appear. The contents
116
+ of the NOTICE file are for informational purposes only and
117
+ do not modify the License. You may add Your own attribution
118
+ notices within Derivative Works that You distribute, alongside
119
+ or as an addendum to the NOTICE text from the Work, provided
120
+ that such additional attribution notices cannot be construed
121
+ as modifying the License.
122
+
123
+ You may add Your own copyright statement to Your modifications and
124
+ may provide additional or different license terms and conditions
125
+ for use, reproduction, or distribution of Your modifications, or
126
+ for any such Derivative Works as a whole, provided Your use,
127
+ reproduction, and distribution of the Work otherwise complies with
128
+ the conditions stated in this License.
129
+
130
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
131
+ any Contribution intentionally submitted for inclusion in the Work
132
+ by You to the Licensor shall be under the terms and conditions of
133
+ this License, without any additional terms or conditions.
134
+ Notwithstanding the above, nothing herein shall supersede or modify
135
+ the terms of any separate license agreement you may have executed
136
+ with Licensor regarding such Contributions.
137
+
138
+ 6. Trademarks. This License does not grant permission to use the trade
139
+ names, trademarks, service marks, or product names of the Licensor,
140
+ except as required for reasonable and customary use in describing the
141
+ origin of the Work and reproducing the content of the NOTICE file.
142
+
143
+ 7. Disclaimer of Warranty. Unless required by applicable law or
144
+ agreed to in writing, Licensor provides the Work (and each
145
+ Contributor provides its Contributions) on an "AS IS" BASIS,
146
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
README.md ADDED
@@ -0,0 +1,97 @@
1
+ # Chat Data Pipeline
2
+
3
+ This repository helps you clean, filter, and deduplicate conversation datasets.
4
+
5
+ The open-source community rules the world, so please contribute: open a Pull Request or create an Issue.
6
+
7
+ Star this repository:
8
+
9
+ [![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/chat-data-pipeline?style=social)](https://github.com/AlekseyKorshuk/chat-data-pipeline)
10
+
11
+ ## Quick Start
12
+
13
+ Clone and install dependencies:
14
+
15
+ ```shell
16
+ git clone https://github.com/AlekseyKorshuk/chat-data-pipeline
17
+ cd chat-data-pipeline
18
+ pip install -r requirements.txt
19
+ ```
20
+
21
+ We will prepare a very small dataset of instructions:
22
+
23
+ ```shell
24
+ python3 main.py --config_path ./experiments/tiny-example.yaml
25
+ ```
26
+
27
+ You can take a look at the YAML file to see how the config is structured.
28
+
29
+ The initial dataset has the following structure for a single sample:
30
+
31
+ ```json
32
+ {
33
+ "conversation": [
34
+ {
35
+ "content": "Explain the main differences between an alligator and a crocodile.",
36
+ "do_train": false,
37
+ "role": "User"
38
+ },
39
+ {
40
+ "content": "Alligators and crocodiles belong to the same order, Crocodilia, but they have several differences. 1) Shape of the snout: Alligators have a U-shaped, wider snout, while crocodiles have a more pointed, V-shaped snout. 2) Teeth placement: In an alligator, lower teeth are mostly hidden when its mouth is closed, while in a crocodile, the fourth lower tooth is visible even when the mouth is closed. 3) Habitat: Alligators are mostly found in freshwater habitats such as swamps and rivers, while crocodiles can be found in both freshwater and saltwater habitats. 4) Distribution: Alligators are mainly found in the southeastern United States and parts of China, whereas crocodiles have a more widespread distribution across Africa, Asia, the Americas, and Australia.",
41
+ "do_train": true,
42
+ "role": "Assistant"
43
+ }
44
+ ]
45
+ }
46
+ ```
47
+
48
+ A conversation can contain more turns: User, Assistant, User, Assistant, and so on.
49
+
50
+ The role can also be "System" for the very first item in the list.
51
+
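+ For illustration, here is roughly how such a sample can be flattened into a single training text: only messages with `"do_train": true` are kept and concatenated. This is a minimal sketch that mirrors the `prepare_dataset` helper in `chat_data_pipeline/minhash_deduplication.py`; the function name here is illustrative, not part of the repository's API:
+
+ ```python
+ def conversation_to_text(example):
+     # Keep only the trainable turns and join them with blank lines,
+     # mirroring prepare_dataset() in minhash_deduplication.py
+     text = ""
+     for message in example["conversation"]:
+         if message["do_train"]:
+             text += message["content"] + "\n\n"
+     return {"text": text.strip()}
+
+ # Usage with a Hugging Face dataset: dataset.map(conversation_to_text)
+ ```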
52
+ ## Custom Setup
53
+
54
+ In general, you can use this pipeline with any dataset that has a string column. Here is an example:
55
+
56
+ ```python
57
+ from datasets import load_dataset
58
+
59
+ from chat_data_pipeline import utils
60
+ from chat_data_pipeline.preprocessor import DataPreprocessor
61
+ from chat_data_pipeline import cleaners as cln
62
+ from chat_data_pipeline import filters as ftr
63
+
64
+ dataset = load_dataset("AlekseyKorshuk/tiny-imdb", split="train")
65
+
66
+ deduplication_config = {
67
+ 'do_deduplication': True,
68
+ 'args': {  # forwarded to deduplicate() as keyword arguments
69
+ 'ngram_size': 5,
70
+ 'num_perm': 256,
71
+ 'threshold': 0.7,
72
+ 'min_ngram_size': 5
73
+ }
74
+ }
75
+
76
+ cleaners = [cln.fix_utf8_encoding, cln.normalize_punctuation, cln.remove_empty_lines]
77
+ filters = [
78
+ utils.custom_partial(ftr.check_word_number,
79
+ min_word_threshold=0,
80
+ max_word_threshold=10000),
81
+ ]
82
+
83
+ preprocessor = DataPreprocessor(
84
+ dataset=dataset,
85
+ column_name="text",
86
+ cleaners=cleaners,
87
+ filters=filters,
88
+ deduplication_config=deduplication_config,
89
+ verbose=False,
90
+ )
91
+ preprocessed_dataset = preprocessor.run()
92
+ ```
93
+
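+ Most filters in `chat_data_pipeline/filters.py` also accept a `dry_run=True` flag: instead of a pass/fail boolean they return the underlying metric (word count, flagged-word ratio, perplexity, ...), and `DataPreprocessor` exposes the same flag so you can inspect the statistics before committing to thresholds. A minimal sketch with an illustrative input string:
+
+ ```python
+ from chat_data_pipeline import filters as ftr
+
+ sample = "This is a short example document."
+ # Raw word count instead of a boolean decision
+ word_count = ftr.check_word_number(sample, dry_run=True)
+ # Boolean decision with explicit thresholds
+ passes = ftr.check_word_number(sample, min_word_threshold=5, max_word_threshold=128)
+ print(word_count, passes)
+ ```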
94
+ ## Acknowledgment
95
+
96
+ This is a friendly fork of Squeakily by CarperAI, but this repository focuses on conversation data, uses pandas to
97
+ speed up the pipeline, and applies recent near-deduplication techniques.
chat_data_pipeline/cleaners.py ADDED
@@ -0,0 +1,79 @@
1
+ import re
2
+ import ftfy
3
+
4
+
5
+ def fix_utf8_encoding(text):
6
+ if text is None:
7
+ return ""
8
+ return ftfy.fix_text(text)
9
+
10
+
11
+ # Adapted from:
12
+ # https://github.com/bigscience-workshop/data-preparation/blob/main/preprocessing/training/01b_oscar_cleaning_and_filtering/filtering.py#L95
13
+ whitespace = {" ", " ", " ", " ", " ", " ", " ", " ", " ", " ", "", "„"}
14
+
15
+
16
+ def normalize_whitespace(text):
17
+ chars = [char if char not in whitespace else " " for char in text]
18
+ text = "".join(chars)
19
+ return text
20
+
21
+
22
+ unicode_punctuation = {
23
+ ",": ",",
24
+ "。": ".",
25
+ "、": ",",
26
+ "„": '"',
27
+ "”": '"',
28
+ "“": '"',
29
+ "«": '"',
30
+ "»": '"',
31
+ "1": '"',
32
+ "」": '"',
33
+ "「": '"',
34
+ "《": '"',
35
+ "》": '"',
36
+ "´": "'",
37
+ "∶": ":",
38
+ ":": ":",
39
+ "?": "?",
40
+ "!": "!",
41
+ "(": "(",
42
+ ")": ")",
43
+ ";": ";",
44
+ "–": "-",
45
+ "—": " - ",
46
+ ".": ". ",
47
+ "~": "~",
48
+ "’": "'",
49
+ "…": "...",
50
+ "━": "-",
51
+ "〈": "<",
52
+ "〉": ">",
53
+ "【": "[",
54
+ "】": "]",
55
+ "%": "%",
56
+ "►": "-",
57
+ }
58
+
59
+
60
+ def normalize_punctuation(text):
61
+ chars = [unicode_punctuation.get(char, char) for char in text]
62
+ text = "".join(chars)
63
+ return text
64
+
65
+
66
+ def remove_empty_lines(text):
67
+ lines = text.splitlines()
68
+ func = lambda x: not re.match(r'^\s*$', x)
69
+ filtered = filter(func, lines)
70
+ text = "\n".join(filtered)
71
+ if text is None or not isinstance(text, str):
72
+ text = ""
73
+ return text
74
+
75
+
76
+ def clean_new_lines(text):
77
+ text = text.strip()
78
+ text = text.replace("\n", "")
79
+ return text
chat_data_pipeline/filters.py ADDED
@@ -0,0 +1,289 @@
1
+ import string
2
+
3
+ from chat_data_pipeline import utils
4
+
5
+
6
+ def check_word_number(
7
+ document,
8
+ min_word_threshold=5,
9
+ max_word_threshold=512,
10
+ dry_run=False,
11
+ ):
12
+ words = utils.get_words(document)
13
+ if dry_run:
14
+ return len(words)
15
+ return min_word_threshold <= len(words) <= max_word_threshold
16
+
17
+
18
+ def check_perplexity(
19
+ document,
20
+ kenlm_model,
21
+ min_perplexity_threshold=300,
22
+ max_perplexity_threshold=3_000,
23
+ dry_run=False,
24
+ ):
25
+ perplexity = kenlm_model.get_perplexity(document)
26
+ if dry_run:
27
+ return perplexity
28
+ return min_perplexity_threshold <= perplexity <= max_perplexity_threshold
29
+
30
+
31
+ nsfw_words = ['2g1c', '2 girls 1 cup', 'acrotomophilia', 'alabama hot pocket', 'alaskan pipeline', 'anal', 'anilingus',
32
+ 'anus', 'apeshit', 'arsehole', 'ass', 'asshole', 'assmunch', 'auto erotic', 'autoerotic', 'babeland',
33
+ 'baby batter', 'baby juice', 'ball gag', 'ball gravy', 'ball kicking', 'ball licking', 'ball sack',
34
+ 'ball sucking', 'bangbros', 'bangbus', 'bareback', 'barely legal', 'barenaked', 'bastard', 'bastardo',
35
+ 'bastinado', 'bbw', 'bdsm', 'beaner', 'beaners', 'beaver cleaver', 'beaver lips', 'beastiality',
36
+ 'bestiality', 'big black', 'big breasts', 'big knockers', 'big tits', 'bimbos', 'birdlock', 'bitch',
37
+ 'bitches', 'black cock', 'blonde action', 'blonde on blonde action', 'blowjob', 'blow job',
38
+ 'blow your load', 'blue waffle', 'blumpkin', 'bollocks', 'bondage', 'boner', 'boob', 'boobs',
39
+ 'booty call', 'brown showers', 'brunette action', 'bukkake', 'bulldyke', 'bullet vibe', 'bullshit',
40
+ 'bung hole', 'bunghole', 'busty', 'butt', 'buttcheeks', 'butthole', 'camel toe', 'camgirl', 'camslut',
41
+ 'camwhore', 'carpet muncher', 'carpetmuncher', 'chocolate rosebuds', 'cialis', 'circlejerk',
42
+ 'cleveland steamer', 'clit', 'clitoris', 'clover clamps', 'clusterfuck', 'cock', 'cocks', 'coprolagnia',
43
+ 'coprophilia', 'cornhole', 'coon', 'coons', 'creampie', 'cum', 'cumming', 'cumshot', 'cumshots',
44
+ 'cunnilingus', 'cunt', 'darkie', 'date rape', 'daterape', 'deep throat', 'deepthroat', 'dendrophilia',
45
+ 'dick', 'dildo', 'dingleberry', 'dingleberries', 'dirty pillows', 'dirty sanchez', 'doggie style',
46
+ 'doggiestyle', 'doggy style', 'doggystyle', 'dog style', 'dolcett', 'domination', 'dominatrix', 'dommes',
47
+ 'donkey punch', 'double dong', 'double penetration', 'dp action', 'dry hump', 'dvda', 'eat my ass',
48
+ 'ecchi', 'ejaculation', 'erotic', 'erotism', 'escort', 'eunuch', 'fag', 'faggot', 'fecal', 'felch',
49
+ 'fellatio', 'feltch', 'female squirting', 'femdom', 'figging', 'fingerbang', 'fingering', 'fisting',
50
+ 'foot fetish', 'footjob', 'frotting', 'fuck', 'fuck buttons', 'fuckin', 'fucking', 'fucktards',
51
+ 'fudge packer', 'fudgepacker', 'futanari', 'gangbang', 'gang bang', 'gay sex', 'genitals', 'giant cock',
52
+ 'girl on', 'girl on top', 'girls gone wild', 'goatcx', 'goatse', 'god damn', 'gokkun', 'golden shower',
53
+ 'goodpoop', 'goo girl', 'goregasm', 'grope', 'group sex', 'g-spot', 'guro', 'hand job', 'handjob',
54
+ 'hard core', 'hardcore', 'hentai', 'homoerotic', 'honkey', 'hooker', 'horny', 'hot carl', 'hot chick',
55
+ 'how to kill', 'how to murder', 'huge fat', 'humping', 'incest', 'intercourse', 'jack off', 'jail bait',
56
+ 'jailbait', 'jelly donut', 'jerk off', 'jigaboo', 'jiggaboo', 'jiggerboo', 'jizz', 'juggs', 'kike',
57
+ 'kinbaku', 'kinkster', 'kinky', 'knobbing', 'leather restraint', 'leather straight jacket', 'lemon party',
58
+ 'livesex', 'lolita', 'lovemaking', 'make me come', 'male squirting', 'masturbate', 'masturbating',
59
+ 'masturbation', 'menage a trois', 'milf', 'missionary position', 'mong', 'motherfucker', 'mound of venus',
60
+ 'mr hands', 'muff diver', 'muffdiving', 'nambla', 'nawashi', 'negro', 'neonazi', 'nigga', 'nigger',
61
+ 'nig nog', 'nimphomania', 'nipple', 'nipples', 'nsfw', 'nsfw images', 'nude', 'nudity', 'nutten',
62
+ 'nympho', 'nymphomania', 'octopussy', 'omorashi', 'one cup two girls', 'one guy one jar', 'orgasm',
63
+ 'orgy', 'paedophile', 'paki', 'panties', 'panty', 'pedobear', 'pedophile', 'pegging', 'penis',
64
+ 'phone sex', 'piece of shit', 'pikey', 'pissing', 'piss pig', 'pisspig', 'playboy', 'pleasure chest',
65
+ 'pole smoker', 'ponyplay', 'poof', 'poon', 'poontang', 'punany', 'poop chute', 'poopchute', 'porn',
66
+ 'porno', 'pornography', 'prince albert piercing', 'pthc', 'pubes', 'pussy', 'queaf', 'queef', 'quim',
67
+ 'raghead', 'raging boner', 'rape', 'raping', 'rapist', 'rectum', 'reverse cowgirl', 'rimjob', 'rimming',
68
+ 'rosy palm', 'rosy palm and her 5 sisters', 'rusty trombone', 'sadism', 'santorum', 'scat', 'schlong',
69
+ 'scissoring', 'semen', 'sex', 'sexcam', 'sexo', 'sexy', 'sexual', 'sexually', 'sexuality',
70
+ 'shaved beaver', 'shaved pussy', 'shemale', 'shibari', 'shit', 'shitblimp', 'shitty', 'shota',
71
+ 'shrimping', 'skeet', 'slanteye', 'slut', 's&m', 'smut', 'snatch', 'snowballing', 'sodomize', 'sodomy',
72
+ 'spastic', 'spic', 'splooge', 'splooge moose', 'spooge', 'spread legs', 'spunk', 'strap on', 'strapon',
73
+ 'strappado', 'strip club', 'style doggy', 'suck', 'sucks', 'suicide girls', 'sultry women', 'swastika',
74
+ 'swinger', 'tainted love', 'taste my', 'tea bagging', 'threesome', 'throating', 'thumbzilla', 'tied up',
75
+ 'tight white', 'tit', 'tits', 'titties', 'titty', 'tongue in a', 'topless', 'tosser', 'towelhead',
76
+ 'tranny', 'tribadism', 'tub girl', 'tubgirl', 'tushy', 'twat', 'twink', 'twinkie', 'two girls one cup',
77
+ 'undressing', 'upskirt', 'urethra play', 'urophilia', 'vagina', 'venus mound', 'viagra', 'vibrator',
78
+ 'violet wand', 'vorarephilia', 'voyeur', 'voyeurweb', 'voyuer', 'vulva', 'wank', 'wetback', 'wet dream',
79
+ 'white power', 'whore', 'worldsex', 'wrapping men', 'wrinkled starfish', 'xx', 'xxx', 'yaoi',
80
+ 'yellow showers', 'yiffy', 'zoophilia', '🖕']
81
+
82
+
83
+ def check_nsfw_words(
84
+ document,
85
+ flagged_words_threshold=0.025,
86
+ dry_run=False,
87
+ ):
88
+ document = str(document.lower())
89
+ num_words = len(utils.get_words(document))
90
+ flagged_words_ratio = 0
91
+ if num_words > 0:
92
+ num_bad_words = sum(
93
+ [document.count(bad_word) for bad_word in nsfw_words]
94
+ )
95
+ flagged_words_ratio = num_bad_words / num_words
96
+
97
+ if dry_run:
98
+ return flagged_words_ratio
99
+ return flagged_words_ratio <= flagged_words_threshold
100
+
101
+
102
+ def check_lowercase_ratio(
103
+ document,
104
+ lowercase_threshold=0.75,
105
+ dry_run=False,
106
+ ):
107
+ ascii_lowercase = string.ascii_lowercase
108
+ count = lambda l1, l2: len(list(filter(lambda c: c in l2, l1)))
109
+ letter_count = count(document, ascii_lowercase)
110
+ lowercase_ratio = letter_count / len(document) if len(document) else 0
111
+ if dry_run:
112
+ return lowercase_ratio
113
+ return lowercase_ratio >= lowercase_threshold
114
+
115
+
116
+ def check_char_repetition(
117
+ document,
118
+ char_repetition_len=10,
119
+ char_repetition_threshold=0.2,
120
+ dry_run=False,
121
+ ):
122
+ char_rep_ratio = utils.get_char_repetition_ratio(
123
+ document, char_repetition_len
124
+ )
125
+ if dry_run:
126
+ return char_rep_ratio
127
+ else:
128
+ return char_rep_ratio <= char_repetition_threshold
129
+
130
+
131
+ def check_truncation(
132
+ document,
133
+ splitter_token="<|truncation_splitter|>",
134
+ dry_run=False,
135
+ ):
136
+ model_response, edited_response = document.split(splitter_token)
137
+ is_truncation = edited_response not in model_response
138
+ if dry_run:
139
+ is_truncation = int(is_truncation)
140
+ return is_truncation
141
+
142
+
143
+ punctuations = {".", "!", "?", "*", '"', "”", "~", "…", "'", "]", ")", "`", ";"}
144
+
145
+
146
+ def check_completion(
147
+ document,
148
+ dry_run=False,
149
+ ):
150
+ document = str(document).strip()
151
+ last_char = None if len(document) == 0 else document[-1]
152
+
153
+ is_completed = last_char in punctuations
154
+ if dry_run:
155
+ is_completed = int(is_completed)
156
+ return is_completed
157
+
158
+
159
+ def check_gender(
160
+ document,
161
+ splitter_token="<|gender_splitter|>",
162
+ dry_run=False,
163
+ ):
164
+ response, edited_response = document.split(splitter_token)
165
+ gendered_words = ['he', 'she', 'him', 'her', 'girl', 'boy']
166
+ response_words = response.lower().split()
167
+ edited_words = edited_response.lower().split()
168
+ min_length = min(len(response_words), len(edited_words))
169
+ for i in range(min_length):
170
+ is_response_word_gender = response_words[i] in gendered_words
171
+ is_edited_word_gender = edited_words[i] in gendered_words
172
+ if is_response_word_gender and is_edited_word_gender and \
173
+ response_words[i] != edited_words[i]:
174
+ return True
175
+ return False
176
+
177
+
178
+ def check_empty(
179
+ document,
180
+ dry_run=False,
181
+ ):
182
+ document = document.replace("...", "")
183
+ document = document.replace("…", "")
184
+ document = document.strip()
185
+ return len(document) != 0
186
+
187
+
188
+ unwanted_words = [
189
+ "prioritize human safety",
190
+ "ethical principles",
191
+ "harmful to human beings",
192
+ "September 2021",
193
+ "as a language model",
194
+ "ethical guidelines",
195
+ "as an AI language model",
196
+ "my guidelines",
197
+ "As an AI",
198
+ "prioritize user safety",
199
+ "adhere to ethical guidelines",
200
+ "harmful consequences",
201
+ "potentially harmful",
202
+ "dangerous activities",
203
+ "promote safety",
204
+ "well-being of all users",
205
+ "responsible information sharing",
206
+ "jeopardize the safety",
207
+ "illegal actions or intentions",
208
+ "undermine the stability",
209
+ "promote the well-being",
210
+ "illegal activities or actions",
211
+ "adherence to the law",
212
+ "potentially be harmful",
213
+ "illegal substances or activities",
214
+ "committed to promoting",
215
+ "safe information",
216
+ "lawful information",
217
+ "cannot provide guidance",
218
+ "cannot provide information",
219
+ "unable to offer assistance",
220
+ "cannot engage in discussions",
221
+ "programming prohibits",
222
+ "follow ethical guidelines",
223
+ "ensure the safety",
224
+ "involves an illegal subject",
225
+ "prioritize safety",
226
+ "illegal subject",
227
+ "prioritize user well-being",
228
+ "cannot support or promote",
229
+ "activities that could harm",
230
+ "pose a risk to others",
231
+ "against my programming",
232
+ "activities that could undermine",
233
+ "potentially dangerous",
234
+ "not within the scope",
235
+ "designed to prioritize safety",
236
+ "not able to provide",
237
+ "maintain user safety",
238
+ "adhere to safety guidelines",
239
+ "dangerous or harmful",
240
+ "cannot provide any information",
241
+ "focus on promoting safety",
242
+ ]
243
+ harsh_unwanted_words = [
244
+ "i'm sorry",
245
+ "i am sorry",
246
+ "OpenAI",
247
+ "ChatGPT",
248
+ "Assistant",
249
+ "don't know",
250
+ "do not know",
251
+ "can not feel",
252
+ "can't feel",
253
+ "don't understand",
254
+ "do not understand",
255
+ "<noinput>",
256
+ "sorry",
257
+ "AI",
258
+ "language model",
259
+ "LLM",
260
+ "Artificial intelligence",
261
+ "assist",
262
+ "harm",
263
+ "help",
264
+ "welcome",
265
+ ]
266
+ unwanted_words = [unwanted_word.lower().strip() for unwanted_word in unwanted_words]
267
+ harsh_unwanted_words = [unwanted_word.lower().strip() for unwanted_word in unwanted_words + harsh_unwanted_words]
268
+
269
+
270
+ def check_ethics(
271
+ document,
272
+ dry_run=False,
273
+ ):
274
+ document = str(document.lower())
275
+ for unwanted_string in unwanted_words:
276
+ if unwanted_string in document:
277
+ return False
278
+ return True
279
+
280
+
281
+ def check_ethics_harsh(
282
+ document,
283
+ dry_run=False,
284
+ ):
285
+ document = str(document.lower())
286
+ for unwanted_string in harsh_unwanted_words:
287
+ if unwanted_string in document:
288
+ return False
289
+ return True
chat_data_pipeline/kenlm_model.py ADDED
@@ -0,0 +1,200 @@
1
+ """
2
+ Adapted from KenLM repository: https://huggingface.co/edugp/kenlm
3
+ """
4
+
5
+ import os
6
+ import re
7
+ import unicodedata
8
+
9
+ from huggingface_hub import cached_download, hf_hub_url
10
+ import sentencepiece
11
+ import kenlm
12
+ from requests.exceptions import HTTPError
13
+ from typing import Dict
14
+
15
+ KENLM_MODEL_REPO = "edugp/kenlm"
16
+
17
+
18
+ class SentencePiece:
19
+ def __init__(
20
+ self,
21
+ model: str,
22
+ ):
23
+ super().__init__()
24
+ self.sp = sentencepiece.SentencePieceProcessor()
25
+ self.sp.load(str(model))
26
+
27
+ def do(self, text: str) -> str:
28
+ tokenized = self.sp.encode_as_pieces(text)
29
+ return " ".join(tokenized)
30
+
31
+
32
+ class KenlmModel:
33
+ digit_re: re.Pattern = re.compile(r"\d")
34
+ unicode_punct: Dict[str, str] = {
35
+ ",": ",",
36
+ "。": ".",
37
+ "、": ",",
38
+ "„": '"',
39
+ "”": '"',
40
+ "“": '"',
41
+ "«": '"',
42
+ "»": '"',
43
+ "1": '"',
44
+ "」": '"',
45
+ "「": '"',
46
+ "《": '"',
47
+ "》": '"',
48
+ "´": "'",
49
+ "∶": ":",
50
+ ":": ":",
51
+ "?": "?",
52
+ "!": "!",
53
+ "(": "(",
54
+ ")": ")",
55
+ ";": ";",
56
+ "–": "-",
57
+ "—": " - ",
58
+ ".": ". ",
59
+ "~": "~",
60
+ "’": "'",
61
+ "…": "...",
62
+ "━": "-",
63
+ "〈": "<",
64
+ "〉": ">",
65
+ "【": "[",
66
+ "】": "]",
67
+ "%": "%",
68
+ "►": "-",
69
+ }
70
+ unicode_punct_re = re.compile(f"[{''.join(unicode_punct.keys())}]")
71
+ non_printing_chars_re = re.compile(
72
+ f"[{''.join(map(chr, list(range(0, 32)) + list(range(127, 160))))}]"
73
+ )
74
+ kenlm_model_dir = None
75
+ sentence_piece_model_dir = None
76
+
77
+ def __init__(
78
+ self,
79
+ model_dataset: str,
80
+ language: str,
81
+ lower_case: bool = False,
82
+ remove_accents: bool = False,
83
+ normalize_numbers: bool = True,
84
+ punctuation: int = 1,
85
+ ):
86
+ self.download_kenlm_model(model_dataset, language)
87
+ try:
88
+ self.model = kenlm.Model(self.kenlm_model_dir)
89
+ self.tokenizer = SentencePiece(self.sentence_piece_model_dir)
90
+ except OSError:
91
+ os.remove(self.kenlm_model_dir)
92
+ if os.path.exists(self.sentence_piece_model_dir):
93
+ os.remove(self.sentence_piece_model_dir)
94
+ raise OSError(
95
+ "File was corrupt and should have been removed. Please, retry."
96
+ )
97
+ self.accent = remove_accents
98
+ self.case = lower_case
99
+ self.numbers = normalize_numbers
100
+ self.punct = punctuation
101
+
102
+ @classmethod
103
+ def from_pretrained(
104
+ cls,
105
+ *,
106
+ model_dataset: str,
107
+ language: str,
108
+ lower_case: bool,
109
+ remove_accents: bool,
110
+ normalize_numbers: bool,
111
+ punctuation: int,
112
+ ):
113
+ return cls(
114
+ model_dataset,
115
+ language,
116
+ lower_case,
117
+ remove_accents,
118
+ normalize_numbers,
119
+ punctuation,
120
+ )
121
+
122
+ def pp(self, log_score, length):
123
+ return 10.0 ** (-log_score / length)
124
+
125
+ def get_perplexity(self, doc: str, normalize_cc_net: bool = True):
126
+ if normalize_cc_net:
127
+ doc = self.normalize(
128
+ doc,
129
+ accent=self.accent,
130
+ case=self.case,
131
+ numbers=self.numbers,
132
+ punct=self.punct,
133
+ )
134
+ # Tokenize (after normalizing): See https://github.com/facebookresearch/cc_net/blob/bda555bd1cf1ee2e0b925363e62a61cd46c8b60d/cc_net/mine.py#L352 for full pipeline
135
+ doc = self.tokenizer.do(doc)
136
+ doc_log_score, doc_length = 0, 0
137
+ for line in doc.split("\n"):
138
+ log_score = self.model.score(line)
139
+ length = len(line.split()) + 1
140
+ doc_log_score += log_score
141
+ doc_length += length
142
+ return round(self.pp(doc_log_score, doc_length), 1)
143
+
144
+ def normalize(
145
+ self,
146
+ line: str,
147
+ accent: bool = True,
148
+ case: bool = True,
149
+ numbers: bool = True,
150
+ punct: int = 1,
151
+ ) -> str:
152
+ line = line.strip()
153
+ if not line:
154
+ return line
155
+ if case:
156
+ line = line.lower()
157
+ if accent:
158
+ line = self.strip_accents(line)
159
+ if numbers:
160
+ line = self.digit_re.sub("0", line)
161
+ if punct == 1:
162
+ line = self.replace_unicode_punct(line)
163
+ elif punct == 2:
164
+ line = self.remove_unicode_punct(line)
165
+ line = self.remove_non_printing_char(line)
166
+ return line
167
+
168
+ def strip_accents(self, line: str) -> str:
169
+ """Strips accents from a piece of text."""
170
+ nfd = unicodedata.normalize("NFD", line)
171
+ output = [c for c in nfd if unicodedata.category(c) != "Mn"]
172
+ if len(output) == len(line):
173
+ return line
174
+ return "".join(output)
175
+
176
+ def replace_unicode_punct(self, text: str) -> str:
177
+ return "".join(self.unicode_punct.get(c, c) for c in text)
178
+
179
+ def remove_unicode_punct(self, text: str) -> str:
180
+ """More aggressive version of replace_unicode_punct but also faster."""
181
+ return self.unicode_punct_re.sub("", text)
182
+
183
+ def remove_non_printing_char(self, text: str) -> str:
184
+ return self.non_printing_chars_re.sub("", text)
185
+
186
+ def download_kenlm_model(self, model_dataset: str, language: str):
187
+ try:
188
+ kenlm_model_url = hf_hub_url(
189
+ KENLM_MODEL_REPO, filename=f"{model_dataset}/{language}.arpa.trie.bin"
190
+ )
191
+ self.kenlm_model_dir = cached_download(kenlm_model_url)
192
+ except HTTPError:
193
+ kenlm_model_url = hf_hub_url(
194
+ KENLM_MODEL_REPO, filename=f"{model_dataset}/{language}.arpa.bin"
195
+ )
196
+ self.kenlm_model_dir = cached_download(kenlm_model_url)
197
+ sentence_piece_model_url = hf_hub_url(
198
+ KENLM_MODEL_REPO, filename=f"{model_dataset}/{language}.sp.model"
199
+ )
200
+ self.sentence_piece_model_dir = cached_download(sentence_piece_model_url)
chat_data_pipeline/minhash_deduplication.py ADDED
@@ -0,0 +1,319 @@
1
+ """
2
+ Adapted from BigCode project: https://github.com/bigcode-project/bigcode-dataset/tree/main/near_deduplication
3
+ """
4
+
5
+ from __future__ import annotations
6
+
7
+ import gc
8
+ import hashlib
9
+ import multiprocessing as mp
10
+ import os
11
+ import random
12
+ import re
13
+ import struct
14
+ import time
15
+ from collections import defaultdict
16
+ from itertools import tee
17
+ from typing import Any, Dict, Iterable, List, Tuple
18
+
19
+ import numpy as np
20
+ from scipy.integrate import quad as integrate
21
+ from tqdm import tqdm
22
+
23
+ from chat_data_pipeline.pipeline import logger
24
+
25
+ SEED = 42
26
+ NON_ALPHA = re.compile("[^A-Za-z_0-9]")
27
+ RNG = np.random.RandomState(SEED)
28
+ MAX_HASH = np.uint64((1 << 32) - 1)
29
+ MERSENNE_PRIME = np.uint64((1 << 61) - 1)
30
+
31
+
32
+ def ngrams(sequence: List[str], n: int, min_ngram_size: int) -> Iterable:
33
+ """
34
+ Directly taken from nltk package to avoid dependency.
35
+
36
+ Parameters
37
+ ----------
38
+ sequence : list
39
+ The sequence of items to be n-grammed.
40
+ n : int
41
+ The order of the n-grams to be extracted.
42
+ min_ngram_size : int
43
+ The minimum size of n-grams.
44
+
45
+ Returns
46
+ -------
47
+ Iterable
48
+ The n-grams generated from the sequence.
49
+ """
50
+ if len(sequence) < min_ngram_size:
51
+ return []
52
+ iterables = tee(sequence, n)
53
+ for i, sub_iterable in enumerate(iterables):
54
+ for _ in range(i):
55
+ next(sub_iterable, None)
56
+ return zip(*iterables)
57
+
58
+
59
+ def sha1_hash32(data):
60
+ """
61
+ Directly taken from datasketch package to avoid dependency.
62
+
63
+ Parameters
64
+ ----------
65
+ data : bytes
66
+
67
+ Returns
68
+ -------
69
+ int
70
+ """
71
+ return struct.unpack("<I", hashlib.sha1(data).digest()[:4])[0]
72
+
73
+
74
+ def embed_func(
75
+ content: str,
76
+ idx: int,
77
+ *,
78
+ num_perm: int,
79
+ ngram_size: int,
80
+ hashranges: List[Tuple[int, int]],
81
+ permutations: np.ndarray,
82
+ min_ngram_size: int = 5,
83
+ ) -> Dict[str, Any]:
84
+ """
85
+ Combined with some datasketch code to better parallelize computation.
86
+
87
+ Parameters
88
+ ----------
89
+ content : str
90
+ The content to be embedded.
91
+ idx : int
92
+ The index of the content.
93
+ num_perm : int
94
+ The number of permutations.
95
+ ngram_size : int
96
+ The size of n-grams.
97
+ hashranges : List[Tuple[int, int]]
98
+ The ranges of hash values.
99
+ permutations : np.ndarray
100
+ The permutations for the minhash.
101
+ min_ngram_size : int
102
+ The minimum size of n-grams.
103
+
104
+ Returns
105
+ -------
106
+ Dict[str, Any]
107
+ The hash values in each range and the index.
108
+ """
109
+ hashvalues = np.ones(num_perm, dtype=np.uint64) * MAX_HASH
110
+ tokens = {" ".join(t) for t in ngrams(NON_ALPHA.split(content), ngram_size, min_ngram_size)}
111
+ hv = np.array([sha1_hash32(token.encode("utf-8")) for token in tokens], dtype=np.uint64) # noqa: E501
112
+ a, b = permutations
113
+ phv = np.bitwise_and(((hv * np.tile(a, (len(hv), 1)).T).T + b) % MERSENNE_PRIME, MAX_HASH) # noqa: E501
114
+ hashvalues = np.vstack([phv, hashvalues]).min(axis=0)
115
+ Hs = [bytes(hashvalues[start:end].byteswap().data) for start, end in hashranges]
116
+ return {"__signatures__": Hs, "__id__": idx}
117
+
118
+
119
+ def optimal_param(
120
+ threshold: float,
121
+ num_perm: int,
122
+ false_positive_weight: float = 0.5,
123
+ false_negative_weight: float = 0.5,
124
+ ):
125
+ """
126
+ Compute the optimal `MinHashLSH` parameter that minimizes the weighted sum
127
+ of probabilities of false positive and false negative, taken from datasketch.
128
+
129
+ Parameters
130
+ ----------
131
+ threshold : float
132
+ The threshold for similarity.
133
+ num_perm : int
134
+ The number of permutations.
135
+ false_positive_weight : float
136
+ The weight of false positive.
137
+ false_negative_weight : float
138
+ The weight of false negative.
139
+
140
+ Returns
141
+ -------
142
+ Tuple[int, int]
143
+ The optimal `b` and `r` parameters.
144
+ The number of bands, and the number of rows per band respectively.
145
+ """
146
+
147
+ def false_positive_probability(threshold: float, b: int, r: int):
148
+ """Source: `datasketch.lsh`"""
149
+
150
+ def proba(s):
151
+ return 1 - (1 - s ** float(r)) ** float(b)
152
+
153
+ a, _ = integrate(proba, 0.0, threshold)
154
+ return a
155
+
156
+ def false_negative_probability(threshold: float, b: int, r: int):
157
+ """Source: `datasketch.lsh`"""
158
+
159
+ def proba(s):
160
+ return 1 - (1 - (1 - s ** float(r)) ** float(b))
161
+
162
+ a, _ = integrate(proba, threshold, 1.0)
163
+ return a
164
+
165
+ min_error = float("inf")
166
+ opt = (0, 0)
167
+ for b in range(1, num_perm + 1):
168
+ max_r = int(num_perm / b)
169
+ for r in range(1, max_r + 1):
170
+ fp = false_positive_probability(threshold, b, r)
171
+ fn = false_negative_probability(threshold, b, r)
172
+ error = fp * false_positive_weight + fn * false_negative_weight
173
+ if error < min_error:
174
+ min_error = error
175
+ opt = (b, r)
176
+ return opt
177
+
178
+
179
+ class UnionFind:
180
+ def __init__(self):
181
+ self.parent: Dict[int, int] = {}
182
+
183
+ def find(self, x):
184
+ if x not in self.parent:
185
+ self.parent[x] = x
186
+ if self.parent[x] != x:
187
+ self.parent[x] = self.find(self.parent[x])
188
+ return self.parent[x]
189
+
190
+ def union(self, x, y):
191
+ px = self.find(x)
192
+ py = self.find(y)
193
+ self.parent[px] = self.parent[py] = min(px, py)
194
+
195
+
196
+ def prepare_dataset(dataset):
197
+ def map_func(example):
198
+ text = ""
199
+ for message in example["conversation"]:
200
+ if message["do_train"]:
201
+ text += message["content"] + "\n\n"
202
+ return {
203
+ "text": text.strip()
204
+ }
205
+
206
+ dedup_ready_dataset = dataset.map(
207
+ map_func,
208
+ num_proc=os.cpu_count(),
209
+ desc="Preparing..."
210
+ )
211
+ return dedup_ready_dataset
212
+
213
+
214
+ def deduplicate(
215
+ dataset, # noqa: E501
216
+ column="text",
217
+ ngram_size=5,
218
+ num_perm=256,
219
+ threshold=0.7,
220
+ min_ngram_size=5,
221
+ ):
222
+ mp.set_start_method("fork", force=True)
223
+ uf = UnionFind()
224
+
225
+ time_measures = {}
226
+ start_time = time.time()
227
+
228
+ B, R = optimal_param(threshold, num_perm)
229
+ HASH_RANGES = [(i * R, (i + 1) * R) for i in range(B)]
230
+ HASH_TABLES = [defaultdict(set) for _ in range(B)]
231
+
232
+ time_measures["load_dataset"] = time.time()
233
+ time_measures["load_dataset"] = time.time() - time_measures["load_dataset"]
234
+ DATA_SIZE = len(dataset)
235
+ PERMUTATIONS = np.array(
236
+ [
237
+ (
238
+ RNG.randint(1, MERSENNE_PRIME, dtype=np.uint64),
239
+ RNG.randint(0, MERSENNE_PRIME, dtype=np.uint64),
240
+ )
241
+ for _ in range(num_perm)
242
+ ],
243
+ dtype=np.uint64,
244
+ ).T
245
+
246
+ time_measures["minhash"] = time.time()
247
+ embedded = dataset.map(
248
+ function=embed_func,
249
+ fn_kwargs={
250
+ "num_perm": num_perm,
251
+ "hashranges": HASH_RANGES,
252
+ "ngram_size": ngram_size,
253
+ "permutations": PERMUTATIONS,
254
+ "min_ngram_size": min_ngram_size,
255
+ },
256
+ input_columns=[column],
257
+ remove_columns=dataset.column_names,
258
+ num_proc=os.cpu_count(),
259
+ with_indices=True,
260
+ desc="Fingerprinting...",
261
+ )
262
+ time_measures["minhash"] = time.time() - time_measures["minhash"]
263
+
264
+ time_measures["clustering"] = time.time()
265
+ batch_size: int = 10000
266
+ for i in tqdm(
267
+ range(0, len(embedded), batch_size), dynamic_ncols=True, desc="Iterating MinHashes..." # noqa: E501
268
+ ):
269
+ batch = embedded[i: i + batch_size]
270
+ for key, Hs in zip(batch["__id__"], batch["__signatures__"]):
271
+ for H, hashtable in zip(Hs, HASH_TABLES):
272
+ hashtable[H].add(key)
273
+ for table in tqdm(HASH_TABLES, dynamic_ncols=True, desc="Clustering..."):
274
+ for cluster in table.values():
275
+ if len(cluster) <= 1:
276
+ continue
277
+ idx = min(cluster)
278
+ for x in cluster:
279
+ uf.union(x, idx)
280
+ time_measures["clustering"] = time.time() - time_measures["clustering"]
281
+
282
+ time_measures["filtering"] = time.time()
283
+ gc.freeze()
284
+ gc.disable()
285
+ dataset = dataset.map(
286
+ function=lambda _, idx: {"__cluster__": uf.find(idx)},
287
+ with_indices=True,
288
+ num_proc=os.cpu_count(),
289
+ new_fingerprint=str(random.getrandbits(128)),
290
+ desc="Finding clusters...",
291
+ )
292
+ gc.enable()
293
+ gc.collect()
294
+ # This is where the deduplication happens
295
+ # Since there is no easy groupby in datasets
296
+ # I will use this simple filter for now
297
+ final_data = dataset.filter(
298
+ function=lambda record, idx: record["__cluster__"] == idx,
299
+ with_indices=True,
300
+ num_proc=os.cpu_count(),
301
+ desc="Filtering clusters...",
302
+ )
303
+ time_measures["filtering"] = time.time() - time_measures["filtering"]
304
+
305
+ FINAL_DATA_SIZE = len(final_data)
306
+ DUP_SIZE = DATA_SIZE - FINAL_DATA_SIZE
307
+ PAD = 32
308
+
309
+ for key, value in time_measures.items():
310
+ logger.info(f"{key:<{PAD}}: {value:.2f} seconds")
311
+ logger.info(f"{'Data Number (before)':<{PAD}}: {DATA_SIZE}")
312
+ logger.info(
313
+ f"{'Data Number (after)':<{PAD}}: {FINAL_DATA_SIZE} ({FINAL_DATA_SIZE / DATA_SIZE:.2%})" # noqa: E501
314
+ )
315
+ logger.info(f"{'Duplicate Number':<{PAD}}: {DUP_SIZE} ({DUP_SIZE / DATA_SIZE:.2%})") # noqa: E501
316
+ logger.info(f"{'Total Time':<{PAD}}: {time.time() - start_time:.2f} seconds")
317
+ logger.info("🤗 Happy Deduplicating 🤗")
318
+
319
+ return final_data
chat_data_pipeline/pipeline.py ADDED
@@ -0,0 +1,75 @@
1
+ import logging
2
+
3
+ import numpy as np
4
+ from datasets import Dataset, concatenate_datasets
5
+ from rich.logging import RichHandler
6
+ import tqdm
7
+
8
+ tqdm.tqdm.pandas()
9
+
10
+ logger = logging.getLogger(__name__)
11
+ logger.setLevel(logging.INFO)
12
+ logger.addHandler(RichHandler(rich_tracebacks=True))
13
+ # Turn off logging for datasets
14
+ logging.getLogger("datasets").setLevel(logging.ERROR)
15
+
16
+
17
+ class Pipeline:
18
+ def __init__(self, datasources):
19
+ self.datasources = datasources
20
+
21
+ def run(self, dry_run=False):
22
+ for i in range(len(self.datasources)):
23
+ self.datasources[i]["dataset"] = self.datasources[i]["dataset"].to_pandas()
24
+
25
+ column_name = self.datasources[i]["columns"][0]
26
+ logger.info(f"Running datasource: {self.datasources[i]['name']}")
27
+
28
+ for cleaner_func in self.datasources[i]["cleaners"]:
29
+ self.datasources[i]["dataset"] = apply_cleaner(
30
+ self.datasources[i]["dataset"],
31
+ column_name,
32
+ cleaner_func
33
+ )
34
+
35
+ for filter_func in self.datasources[i]["filters"]:
36
+ self.datasources[i]["dataset"] = apply_filter(
37
+ self.datasources[i]["dataset"],
38
+ column_name,
39
+ filter_func,
40
+ dry_run
41
+ )
42
+ self.datasources[i]["dataset"] = smart_from_pandas(self.datasources[i]["dataset"])
43
+
44
+
45
+ def apply_cleaner(dataframe, column_name, cleaner_func):
46
+ logger.info(f"Running cleaner: {cleaner_func.__name__} on {column_name}")
47
+ func = lambda x: cleaner_func(x[column_name])
48
+ dataframe[column_name] = dataframe.progress_apply(func, axis=1)
49
+ return dataframe
50
+
51
+
52
+ def apply_filter(dataframe, column_name, filter_func, dry_run):
53
+ logger.info(f"Running filter: {filter_func.__name__} on {column_name}")
54
+ criteria_column_name = f"{column_name}_{filter_func.__name__}_criteria"
55
+ func = lambda x: filter_func(x[column_name], dry_run=dry_run)
56
+ dataframe[criteria_column_name] = dataframe.progress_apply(func, axis=1)
57
+ logger.info(f"Criteria statistics:\n{dataframe[criteria_column_name].describe()}")
58
+ if not dry_run:
59
+ func = lambda x: x[criteria_column_name]
60
+ dataframe = dataframe[dataframe.progress_apply(func, axis=1)]
61
+ dataframe = dataframe.drop(
62
+ [criteria_column_name, "__index_level_0__"],
63
+ axis=1,
64
+ errors='ignore'
65
+ )
66
+
67
+ return dataframe
68
+
69
+
70
+ def smart_from_pandas(df, chunk_size=200_000):
71
+ datasets = []
72
+ for g, batch in df.groupby(np.arange(len(df)) // chunk_size):
73
+ dataset = Dataset.from_pandas(batch, preserve_index=False)
74
+ datasets.append(dataset)
75
+ return concatenate_datasets(datasets)
chat_data_pipeline/preprocessor.py ADDED
@@ -0,0 +1,91 @@
1
+ import gc
2
+ import shutil
3
+
4
+ from datasets import Dataset, load_from_disk
5
+
6
+ from chat_data_pipeline.pipeline import logger
7
+ from chat_data_pipeline import utils
8
+ from chat_data_pipeline.minhash_deduplication import deduplicate
9
+
10
+
11
+ class DataPreprocessor:
12
+ dataset: Dataset
13
+
14
+ def __init__(
15
+ self,
16
+ dataset,
17
+ column_name,
18
+ cleaners,
19
+ filters,
20
+ deduplication_config,
21
+ dry_run=False,
22
+ verbose=False
23
+ ):
24
+ self.dataset = dataset
25
+ self.column_name = column_name
26
+ self.cleaners = cleaners
27
+ self.filters = filters
28
+ self.deduplication_config = deduplication_config
29
+ self.dry_run = dry_run
30
+ self.verbose = verbose
31
+
32
+ def run(self):
33
+ self._clean_dataset()
34
+ self._filter_dataset()
35
+ if self.deduplication_config.get("do_deduplication", False):
36
+ self._deduplicate_dataset()
37
+ return self.dataset
38
+
39
+ def _clean_dataset(self):
40
+ if len(self.cleaners) > 0:
41
+ self.dataset = utils.run_cleaner(self.dataset, self.column_name, self.cleaners)
42
+ return self.dataset
43
+
44
+ def _filter_dataset(self):
45
+ for filter_func in self.filters:
46
+ dataset_length = len(self.dataset)
47
+ ids = range(dataset_length)
48
+ self.dataset = self.dataset.add_column("ids", ids)
49
+ filtered_dataset = utils.run_filter(
50
+ dataset=self.dataset,
51
+ column_name=self.column_name,
52
+ filter_func=filter_func,
53
+ dry_run=self.dry_run
54
+ )
55
+ self._print_filter_logs(filtered_dataset, filter_func.__name__)
56
+ self.dataset = filtered_dataset.remove_columns("ids")
57
+
58
+ return self.dataset
59
+
60
+ def _deduplicate_dataset(self):
61
+ dataset_length = len(self.dataset)
62
+ ids = range(dataset_length)
63
+ self.dataset = self.dataset.add_column("ids", ids)
64
+ # need to save to disk and load again, otherwise it is very slow
65
+ target_directory = "./.temp-dataset"
66
+ shutil.rmtree(target_directory, ignore_errors=True)
67
+ try:
68
+ self.dataset.save_to_disk(target_directory)
69
+ except PermissionError:
70
+ logger.info("Cannot save dataset, nothing changed. Skipping...")
71
+ gc.collect()
72
+ self.dataset = load_from_disk(target_directory)
73
+ deduplicated_ds = deduplicate(
74
+ self.dataset,
75
+ column=self.column_name,
76
+ **self.deduplication_config.get("args", {})
77
+ )
78
+ self.dataset = deduplicated_ds.remove_columns("ids")
79
+ return self.dataset
80
+
81
+ def _print_filter_logs(self, filtered_dataset, filter_name):
82
+ original_length = len(self.dataset)
83
+ filtered_length = len(filtered_dataset)
84
+ reduced_percent = round(100 * (original_length - filtered_length) / original_length, 2)
85
+ logger.info(
86
+ f'Filtered by {filter_name} on {self.column_name}:\n'
87
+ f'{reduced_percent}% = {original_length - filtered_length:,} samples reduced\n'
88
+ f'New dataset size: {filtered_length:,} rows'
89
+ )
90
+ if self.verbose:
91
+ utils.print_sample_dropped_examples(self.dataset, filtered_dataset, num_samples=10)
chat_data_pipeline/utils.py ADDED
@@ -0,0 +1,339 @@
1
+ import random
2
+ import re
3
+ from functools import partial
4
+ from collections import Counter
5
+
6
+ from datasets import load_dataset, Dataset, concatenate_datasets
7
+ import numpy as np
8
+ import pandas as pd
9
+ import tqdm
10
+ import yaml
11
+
12
+ from chat_data_pipeline.pipeline import Pipeline, logger
13
+ from chat_data_pipeline import cleaners as cln
14
+ from chat_data_pipeline import filters as ftr
15
+ from chat_data_pipeline.kenlm_model import KenlmModel
16
+
17
+
18
+ def load_yaml(config_path):
19
+ with open(config_path, "r") as f:
20
+ return yaml.safe_load(f)
21
+
22
+
23
+ def get_cleaners_from_config(config):
24
+ cleaner_funcs = []
25
+ cleaners = {}
26
+ if config.get("cleaners") is not None:
27
+ cleaners = config.get("cleaners", {})
28
+ for cleaner, do_clean in cleaners.items():
29
+ if do_clean:
30
+ cleaner_funcs.append(
31
+ getattr(cln, cleaner)
32
+ )
33
+ return cleaner_funcs
34
+
35
+
36
+ def get_filters_from_config(config):
37
+ filter_funcs = []
38
+ filters = {}
39
+ if config.get("filters") is not None:
40
+ filters = config.get("filters", {})
41
+ for filter, value in filters.items():
42
+ args = {}
43
+ if value is not None:
44
+ args = value.get("args", {})
45
+ filter_func = custom_partial(
46
+ getattr(ftr, filter),
47
+ **args
48
+ )
49
+ filter_funcs.append(filter_func)
50
+ return filter_funcs
51
+
52
+
53
+ def get_output_text_cleaners():
54
+ cleaners = [
55
+ cln.normalize_whitespace,
56
+ cln.normalize_punctuation,
57
+ cln.fix_utf8_encoding,
58
+ cln.remove_empty_lines
59
+ ]
60
+ return cleaners
61
+
62
+
63
+ def get_input_text_cleaners():
64
+ cleaners = [
65
+ cln.normalize_whitespace,
66
+ cln.remove_empty_lines
67
+ ]
68
+ return cleaners
69
+
70
+
71
+ def get_output_text_filters(filter_nsfw, filter_perplexity):
72
+ filters = [
73
+ custom_partial(
74
+ ftr.check_word_number,
75
+ min_word_threshold=5,
76
+ max_word_threshold=128,
77
+ ),
78
+ custom_partial(
79
+ ftr.check_completion,
80
+ ),
81
+ custom_partial(
82
+ ftr.check_char_repetition,
83
+ char_repetition_len=10,
84
+ char_repetition_threshold=0.2,
85
+ ),
86
+ custom_partial(
87
+ ftr.check_lowercase_ratio,
88
+ lowercase_threshold=0.75,
89
+ ),
90
+ ]
91
+ if filter_nsfw:
92
+ filters.append(
93
+ custom_partial(
94
+ ftr.check_nsfw_words,
95
+ flagged_words_threshold=0.025,
96
+ ),
97
+ )
98
+ if filter_perplexity:
99
+ filters.append(
100
+ custom_partial(
101
+ ftr.check_perplexity,
102
+ kenlm_model=_get_kenlm_model(),
103
+ min_perplexity_threshold=300,
104
+ max_perplexity_threshold=10_000
105
+ )
106
+ )
107
+ return filters
108
+
109
+
110
+ def _get_kenlm_model():
111
+ kenlm_model = KenlmModel.from_pretrained(
112
+ model_dataset="wikipedia",
113
+ language="en",
114
+ lower_case=True,
115
+ remove_accents=True,
116
+ normalize_numbers=True,
117
+ punctuation=1,
118
+ )
119
+ return kenlm_model
120
+
121
+
122
+ def get_input_text_filters():
123
+ filters = [
124
+ custom_partial(
125
+ ftr.check_lowercase_ratio,
126
+ lowercase_threshold=0.55,
127
+ ),
128
+ ]
129
+ return filters
130
+
131
+
132
+ def get_truncation_filters(splitter_token):
133
+ filters = [
134
+ custom_partial(
135
+ ftr.check_truncation,
136
+ splitter_token=splitter_token
137
+ ),
138
+ ]
139
+ return filters
140
+
141
+
142
+ def custom_partial(func, **args):
143
+ partial_func = partial(func, **args)
144
+ partial_func.__name__ = func.__name__
145
+ partial_func.__module__ = func.__module__
146
+ return partial_func
147
+
148
+
149
+ def print_sample_dropped_examples(dataset, new_dataset, num_samples=5):
150
+ original_ids = dataset["ids"]
151
+ new_ids = new_dataset["ids"]
152
+ dropped_ids = set(original_ids) - set(new_ids)
153
+ num_samples = min(len(dropped_ids), num_samples)
154
+ ids_to_show = random.sample(list(dropped_ids), num_samples)
155
+ for sample_id in ids_to_show:
156
+ logger.info(f"Dropped sample: {dataset[sample_id]}")
157
+
158
+
159
+ # The Pipeline does not prefix newly added score columns with the source column name, so rename them here
160
+ def rename_dry_run_columns(dataset, filter_column_name):
161
+ column_names = set(dataset.column_names)
162
+ column_names = column_names - {"output_text", "input_text", "summary", "user_id"}
163
+ columns_mapping = dict()
164
+ for column_name in column_names:
165
+ # Check whether the column was already renamed by a previous call of this function
166
+ if "__" not in column_name:
167
+ columns_mapping[column_name] = filter_column_name + "__" + column_name
168
+ dataset = dataset.rename_columns(columns_mapping)
169
+ return dataset
170
+
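Worked example of the renaming above (editorial): with filter_column_name="output_text", a dry-run score column produced by a filter is prefixed, while the protected columns and any column already containing "__" are left alone:

    # before: ["output_text", "user_id", "check_word_number"]
    # after:  ["output_text", "user_id", "output_text__check_word_number"]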
171
+
172
+ def get_edit_dataset(dataset_path):
173
+ dataset = load_dataset(dataset_path, split="train", keep_in_memory=False)
174
+ dataset = prepare_edit_dataset(dataset)
175
+ return dataset
176
+
177
+
178
+ def prepare_edit_dataset(dataset):
179
+ columns_mapping = {
180
+ "model_input": "input_text",
181
+ "edited_response": "output_text",
182
+ }
183
+ dataset = dataset.rename_columns(columns_mapping)
184
+ columns_to_keep = list(columns_mapping.values()) + ["user_id", "response"]
185
+ columns_to_remove = set(dataset.column_names) - set(columns_to_keep)
186
+ dataset = dataset.remove_columns(columns_to_remove)
187
+ return dataset
188
+
189
+
190
+ def remove_unused_columns(dataset):
191
+ columns_to_keep = ["user_id", "input_text", "output_text"]
192
+ columns_to_remove = set(dataset.column_names) - set(columns_to_keep)
193
+ dataset = dataset.remove_columns(columns_to_remove)
194
+ return dataset
195
+
196
+
197
+ def post_process_output_text(dataset):
198
+ df = dataset.to_pandas()
199
+ func = lambda x: " " + cln.clean_new_lines(x["output_text"]) + "\n"
200
+ df["output_text"] = df.progress_apply(func, axis=1)
201
+ dataset = Dataset.from_pandas(df)
202
+ return dataset
203
+
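One caveat worth flagging (editorial): `DataFrame.progress_apply` only exists after tqdm's pandas integration has been registered, presumably somewhere else in the package; otherwise a plain `.apply(...)` is the drop-in equivalent without the progress bar.

    # import tqdm; tqdm.tqdm.pandas()   # must run once before progress_apply is available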
204
+
205
+ def sample_datasets(datasets, proportions, target_size):
206
+ target_size = min(
207
+ [target_size] + [len(dataset) / proportion for proportion, dataset in zip(proportions, datasets)]
208
+ )
209
+ sampled_datasets = []
210
+ for proportion, dataset in zip(proportions, datasets):
211
+ sample_proportion = (target_size * proportion) / len(dataset)
212
+ sampled_dataset = sample_dataset(dataset, sample_proportion)
213
+ sampled_datasets.append(sampled_dataset)
214
+ merged_dataset = concatenate_datasets(sampled_datasets)
215
+ return merged_dataset
216
+
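A small worked example of the proportional sampling above (editorial; exact row counts vary slightly because sample_dataset samples per user group and skips groups smaller than 5):

    # datasets of sizes [1000, 500] with proportions [0.5, 0.5] and target_size=2000
    # -> target_size = min(2000, 1000/0.5, 500/0.5) = 1000
    # -> sample_proportion = 0.5 for the first dataset and 1.0 for the second,
    #    i.e. roughly 500 rows from each, merged into a ~50/50 dataset of about 1000 rows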
217
+
218
+ def sample_dataset(dataset, size):
219
+ df = dataset.to_pandas()
220
+ grouped = df.groupby('user_id')
221
+ sample_groups = []
222
+ for _, sub_group in tqdm.tqdm(grouped):
223
+ sample_groups.append(_get_sample_group(sub_group, size=size))
224
+
225
+ df_subset = pd.concat(sample_groups)
226
+ df_subset = df_subset.drop(['__index_level_0__'], axis=1, errors='ignore')
227
+ dataset_subset = Dataset.from_pandas(df_subset)
228
+ return dataset_subset
229
+
230
+
231
+ def _get_sample_group(group, size):
232
+ # Down-sample large per-user groups (superusers); groups with fewer than 5 rows are left untouched
233
+ if len(group) >= 5:
234
+ num_samples = int(len(group) * size)
235
+ group = group.sample(num_samples)
236
+ return group
237
+
238
+
239
+ def split_dataset_by_filter(dataset, column_name, filter_func):
240
+ dataset_length = len(dataset)
241
+ ids = range(dataset_length)
242
+ dataset = dataset.add_column("ids", ids)
243
+ filtered_dataset = run_filter(dataset, column_name, filter_func, dry_run=False)
244
+
245
+ difference_dataset = _dataset_subtraction(dataset, filtered_dataset)
246
+
247
+ filtered_dataset = filtered_dataset.remove_columns("ids")
248
+ difference_dataset = difference_dataset.remove_columns("ids")
249
+
250
+ return filtered_dataset, difference_dataset
251
+
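In short (editorial summary): the function above returns first the rows that pass filter_func on column_name and second the rows it dropped; the temporary "ids" column exists only to compute that set difference and is removed before returning.

    # kept, dropped = split_dataset_by_filter(dataset, "output_text", some_filter)  # illustrative; some_filter is hypothetical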
252
+
253
+ def run_filter(dataset, column_name, filter_func, dry_run):
254
+ datasources = [
255
+ {
256
+ "dataset": dataset,
257
+ "name": "dataset",
258
+ "columns": [column_name],
259
+ "filters": [filter_func],
260
+ "cleaners": [],
261
+ },
262
+ ]
263
+ pipeline = Pipeline(datasources)
264
+ pipeline.run(dry_run=dry_run)
265
+ filtered_dataset = pipeline.datasources[0]["dataset"]
266
+ return filtered_dataset
267
+
268
+
269
+ def run_cleaner(dataset, column_name, cleaners):
270
+ datasources = [
271
+ {
272
+ "dataset": dataset,
273
+ "name": "dataset",
274
+ "columns": [column_name],
275
+ "filters": [],
276
+ "cleaners": cleaners,
277
+ },
278
+ ]
279
+ pipeline = Pipeline(datasources)
280
+ pipeline.run(dry_run=True)
281
+ dataset = pipeline.datasources[0]["dataset"]
282
+ return dataset
283
+
284
+
285
+ def _dataset_subtraction(minuend_dataset, subtrahend_dataset):
286
+ original_ids = minuend_dataset["ids"]
287
+ filtered_ids = subtrahend_dataset["ids"]
288
+ dropped_ids = set(original_ids) - set(filtered_ids)
289
+ original_df = minuend_dataset.to_pandas()
290
+ difference_df = original_df[original_df.ids.isin(dropped_ids)]
291
+ difference_df = difference_df.drop(['__index_level_0__'], axis=1, errors='ignore')
292
+ difference_dataset = Dataset.from_pandas(difference_df)
293
+ return difference_dataset
294
+
295
+
296
+ def add_concatenated_column(dataset, column_name, special_token):
297
+ dataframe = dataset.to_pandas()
298
+ func = lambda x: x["response"] + special_token + x["output_text"]
299
+ dataframe[column_name] = dataframe.progress_apply(func, axis=1)
300
+ dataset = Dataset.from_pandas(dataframe)
301
+ return dataset
302
+
303
+
304
+ def get_words(text):
305
+ return re.findall(r'\w+', text.lower())
306
+
307
+
308
+ # Adapted from:
309
+ # https://github.com/CarperAI/squeakily/blob/ba81f6e11fab424794d46cbf06d398ea2ad4a7f1/squeakily/filter.py#L81
310
+ def get_char_repetition_ratio(doc, char_rep_len):
311
+ freq_char_ngrams = _get_frequency_ngrams(
312
+ doc, char_rep_len
313
+ )
314
+ if len(freq_char_ngrams) == 0:
315
+ return 0
316
+ char_rep_ratio = _calculate_char_repetition_ratio(freq_char_ngrams)
317
+ return char_rep_ratio
318
+
319
+
320
+ def _calculate_char_repetition_ratio(freq_char_ngrams):
321
+ freq_char_ngrams = list(freq_char_ngrams.values())
322
+ freq_char_ngrams = sorted(freq_char_ngrams, reverse=True)
323
+ val_one = len([el for el in freq_char_ngrams if el == 1])
324
+ num_rep_char_ngrams = min(
325
+ int(np.sqrt(len(freq_char_ngrams))),
326
+ len(freq_char_ngrams) - val_one,
327
+ )
328
+ char_rep_ratio = sum(
329
+ freq_char_ngrams[:num_rep_char_ngrams]
330
+ ) / sum(freq_char_ngrams)
331
+ return char_rep_ratio
332
+
333
+
334
+ def _get_frequency_ngrams(doc, n):
335
+ char_ngrams = [
336
+ doc[i: i + n] for i in range(len(doc) - n + 1)
337
+ ]
338
+ freq_char_ngrams = Counter(char_ngrams)
339
+ return freq_char_ngrams
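A quick sanity check of the repetition ratio above (editorial; both values follow directly from the code):

    # get_char_repetition_ratio("a" * 12, char_rep_len=10) -> 1.0  (every 10-gram is "aaaaaaaaaa")
    # a document whose 10-grams are all distinct -> 0.0, since num_rep_char_ngrams collapses to 0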
experiments/instructions/vicuna-v0.yaml ADDED
@@ -0,0 +1,42 @@
1
+ datasets:
2
+ - dataset_path: "AlekseyKorshuk/gpteacher-instruct-chatml"
3
+ - dataset_path: "AlekseyKorshuk/sharegpt-chatml"
4
+ - dataset_path: "AlekseyKorshuk/gpt4-llm-cleaned-chatml"
5
+
6
+ output_dataset_path: "AlekseyKorshuk/vicuna-v0-chatml"
7
+ verbose: False
8
+
9
+ instruction_config:
10
+ cleaners:
11
+ filters:
12
+ check_word_number:
13
+ args:
14
+ min_word_threshold: 2
15
+ max_word_threshold: 9999999
16
+
17
+ deduplication:
18
+ do_deduplication: True
19
+ minhash_config:
20
+ ngram_size: 5
21
+ num_perm: 256
22
+ threshold: 0.7
23
+ min_ngram_size: 5
24
+
25
+ response_config:
26
+ cleaners:
27
+ filters:
28
+ check_word_number:
29
+ args:
30
+ min_word_threshold: 10
31
+ max_word_threshold: 9999999
32
+ check_ethics:
33
+
34
+ deduplication:
35
+ do_deduplication: True
36
+ minhash_config:
37
+ ngram_size: 5
38
+ num_perm: 256
39
+ threshold: 0.7
40
+ min_ngram_size: 5
41
+
42
+
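Editorial note on the config schema, inferred from get_cleaners_from_config and get_filters_from_config above: each key under `cleaners` names a function in chat_data_pipeline.cleaners and is applied when its value is true, and each key under `filters` names a function in chat_data_pipeline.filters, with `args` forwarded as keyword arguments, e.g.:

    # filters:
    #   check_char_repetition:
    #     args:
    #       char_repetition_len: 10
    #       char_repetition_threshold: 0.2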
experiments/tiny-example.yaml ADDED
@@ -0,0 +1,41 @@
1
+ datasets:
2
+ - dataset_path: "AlekseyKorshuk/gpteacher-instruct-chatml"
3
+
4
+ output_dataset_path: "AlekseyKorshuk/tiny-example-chatml"
5
+ verbose: False
6
+
7
+ instruction_config:
8
+ cleaners:
9
+ filters:
10
+ check_word_number:
11
+ args:
12
+ min_word_threshold: 2
13
+ max_word_threshold: 9999999
14
+
15
+ deduplication:
16
+ do_deduplication: True
17
+ minhash_config:
18
+ ngram_size: 5
19
+ num_perm: 256
20
+ threshold: 0.7
21
+ min_ngram_size: 5
22
+
23
+ response_config:
24
+ cleaners:
25
+ fix_utf8_encoding: true
26
+ filters:
27
+ check_word_number:
28
+ args:
29
+ min_word_threshold: 10
30
+ max_word_threshold: 9999999
31
+ check_ethics:
32
+
33
+ deduplication:
34
+ do_deduplication: True
35
+ minhash_config:
36
+ ngram_size: 5
37
+ num_perm: 256
38
+ threshold: 0.7
39
+ min_ngram_size: 5
40
+
41
+
main.py ADDED
@@ -0,0 +1,139 @@
1
+ import os
2
+
3
+ import click
4
+ from datasets import load_dataset, concatenate_datasets
5
+
6
+ from chat_data_pipeline.pipeline import logger
7
+ from chat_data_pipeline import utils
8
+ from chat_data_pipeline.preprocessor import DataPreprocessor
9
+
10
+ PAD = 32
11
+
12
+
13
+ @click.command()
14
+ @click.option('--config_path')
15
+ def main(config_path):
16
+ config = utils.load_yaml(config_path)
17
+ dataset_paths = [dataset["dataset_path"] for dataset in config["datasets"]]
18
+ output_dataset_path = config["output_dataset_path"]
19
+ verbose = config.get("verbose", False)
20
+
21
+ instruction_config = config["instruction_config"]
22
+ response_config = config["response_config"]
23
+
24
+ dataset = combine_datasets(dataset_paths)
25
+
26
+ dataset = dataset.map(
27
+ convert_to_input_output,
28
+ batched=True,
29
+ num_proc=os.cpu_count(),
30
+ remove_columns=list(dataset.features),
31
+ desc="Converring to I/O..."
32
+ )
33
+
34
+ dataset = dataset.map(
35
+ add_content_columns,
36
+ batched=False,
37
+ num_proc=os.cpu_count(),
38
+ desc="Adding content column..."
39
+ )
40
+
41
+ logger.info(utils.get_cleaners_from_config(response_config))
42
+ logger.info(utils.get_filters_from_config(response_config))
43
+ logger.info(response_config.get("deduplication", {}))
44
+ preprocessor = DataPreprocessor(
45
+ dataset=dataset,
46
+ column_name="response",
47
+ cleaners=utils.get_cleaners_from_config(response_config),
48
+ filters=utils.get_filters_from_config(response_config),
49
+ deduplication_config=response_config.get("deduplication", {}),
50
+ verbose=verbose,
51
+ )
52
+ dataset = preprocessor.run()
53
+
54
+ cleaners = utils.get_cleaners_from_config(instruction_config)
55
+ if len(cleaners) > 0:
56
+ logger.warning("Cleaner does not work on instructions. Cleaners set to empty list.")
57
+ preprocessor = DataPreprocessor(
58
+ dataset=dataset,
59
+ column_name="instruction",
60
+ cleaners=[],
61
+ filters=utils.get_filters_from_config(instruction_config),
62
+ deduplication_config=instruction_config.get("deduplication", {}),
63
+ verbose=verbose,
64
+ )
65
+ dataset = preprocessor.run()
66
+
67
+ prepared_dataset_chatml = dataset.map(
68
+ convert_to_chatml,
69
+ batched=False,
70
+ num_proc=os.cpu_count(),
71
+ remove_columns=list(dataset.features)
72
+ )
73
+ prepared_dataset_chatml = prepared_dataset_chatml.shuffle(seed=42)
74
+ prepared_dataset_chatml.push_to_hub(output_dataset_path)
75
+ logger.info(prepared_dataset_chatml)
76
+
77
+
78
+ def combine_datasets(dataset_paths):
79
+ datasets = []
80
+ for dataset_path in dataset_paths:
81
+ dataset = load_dataset(dataset_path)
82
+ dataset = concatenate_datasets(list(dataset.values()))
83
+ if "source" not in dataset.features:
84
+ dataset = dataset.add_column("source", [dataset_path] * len(dataset))
85
+ datasets.append(dataset)
86
+ dataset = concatenate_datasets(datasets)
87
+ return dataset
88
+
89
+
90
+ def convert_to_input_output(examples):
91
+ sources = []
92
+ inputs = []
93
+ outputs = []
94
+ for conversation, source in zip(examples["conversation"], examples["source"]):
95
+ input = []
96
+ for message in conversation:
97
+ if message["do_train"]:
98
+ inputs.append(input.copy())
99
+ outputs.append(message)
100
+ sources.append(source)
101
+ input.append(message)
102
+ return {
103
+ "input": inputs,
104
+ "output": outputs,
105
+ "source": sources
106
+ }
107
+
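A worked example of the expansion above (editorial): every message flagged do_train becomes one training pair whose input is the conversation history up to that point.

    # conversation = [user1 (do_train=False), bot1 (do_train=True), user2 (do_train=False), bot2 (do_train=True)]
    # -> pair 1: input = [user1],               output = bot1
    # -> pair 2: input = [user1, bot1, user2],  output = bot2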
108
+
109
+ def add_content_columns(example):
110
+ response = example["output"]["content"].strip()
111
+ instruction = ""
112
+ if len(example["input"]) > 0:
113
+ instruction = example["input"][-1]["content"].strip()
114
+ return {
115
+ "instruction": instruction,
116
+ "response": response,
117
+ }
118
+
119
+
120
+ def convert_to_chatml(example):
121
+ conversation = []
122
+ for message in example["input"]:
123
+ message["do_train"] = False
124
+ conversation.append(message)
125
+ conversation.append(
126
+ {
127
+ "content": example["response"],
128
+ "role": example["output"]["role"],
129
+ "do_train": True,
130
+ }
131
+ )
132
+ return {
133
+ "conversation": conversation,
134
+ "source": example["source"]
135
+ }
136
+
137
+
138
+ if __name__ == "__main__":
139
+ main()
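For reference, a minimal invocation (editorial sketch; assumes the requirements below are installed and that you are authenticated with the Hugging Face Hub, since the script ends with push_to_hub):

    python main.py --config_path experiments/tiny-example.yaml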
requirements.txt ADDED
@@ -0,0 +1,9 @@
1
+ ftfy==6.1.1
2
+ https://github.com/kpu/kenlm/archive/master.zip
3
+ sentencepiece==0.1.97
4
+ datasketch==1.5.8
5
+ dpu_utils==0.6.0
6
+ datasets==2.11.0
7
+ click==8.1.3
8
+ rich==13.3.4
9
+ typer==0.9.0
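Installation note (editorial): the usual install command below should cover these pins, but the kenlm entry builds from the GitHub source archive, so a working C++ compiler is needed for that line.

    pip install -r requirements.txt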