# cc-multilingual
Downloading and deduplicating Indic multilingual data from CommonCrawl
### Installation for cc_net
```sh
cd cc_net/
make install
```
### Choose a snapshot: `snapshot-id`
#### Step 1: Edit the `config/myconfig.json` file
```json
"dump": "snapshot-id",
"num_shards": 1600,
"lang_whitelist": ["as","bn","gu","kn","hi","ml","mr","ne","or","pb","sa","sd","ta","ur","te","ks","sat","mai","mni","kok","doi","brx"],
"mine_num_processes": 16,
"pipeline": [
"lid",
"keep_lang",
"pp_bucket",
"split_by_lang"
],
"target_size": "100M",
"output_dir": "data",
"mined_dir": "mined",
"cache_dir": "wet_cache"
```
#### Step 2: (Optional) Download data into cache
```sh
wget <wet_file_path>
python3 script.py wet.paths.gz 90 wet_cache/2023-40/
```
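The cache-filling script is not shown in this repository snippet; a minimal stdlib sketch of what such a script might do is below. The `CC_BASE` URL, the function names, and the assumption that `wet.paths.gz` holds one relative WET path per line are all assumptions, not the actual `script.py`.

```python
import gzip
import os
import urllib.request

# Assumed base URL for CommonCrawl WET shards.
CC_BASE = "https://data.commoncrawl.org/"

def wet_paths(paths_gz, limit):
    """Read the first `limit` relative WET paths from a wet.paths.gz listing."""
    with gzip.open(paths_gz, "rt") as f:
        paths = [line.strip() for line in f if line.strip()]
    return paths[:limit]

def fill_cache(paths_gz, limit, cache_dir):
    """Download up to `limit` WET shards into cache_dir, skipping cached ones."""
    os.makedirs(cache_dir, exist_ok=True)
    for rel in wet_paths(paths_gz, limit):
        dest = os.path.join(cache_dir, os.path.basename(rel))
        if not os.path.exists(dest):
            urllib.request.urlretrieve(CC_BASE + rel, dest)
```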
#### Step 3: Run the pipeline
```sh
python3 -m cc_net --config config/myconfig.json
```
## Deduplication
```sh
pip install -r app/requirements.txt
```
#### Step 1: Add the list of files downloaded from cc_net to `listings/file.txt`, one filename per line, in the format `lang_shard.json.gz`
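For illustration, a listings file matching that naming scheme could be generated like this. The language subset, shard count, and output filename here are assumptions for the example, not values the pipeline requires.

```python
# Write one entry per language/shard pair in the lang_shard.json.gz
# format expected by the dedup pipeline (values here are illustrative).
langs = ["hi", "bn"]   # assumed subset of the mined languages
num_shards = 3         # assumed shard count for the example

with open("file.txt", "w") as f:  # hypothetical listings path
    for lang in langs:
        for shard in range(num_shards):
            f.write(f"{lang}_{shard}.json.gz\n")
```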
#### Step 2: Compute MinHash signatures
```sh
python3 app/src/pipeline.py \
  --input_base_uri "file://path/to/ccnet/data" \
  --output_base_uri "/path/to/output" \
  --artifacts_dir "file:///path/to/empty/artifacts" \
  --input /path/to/listings/file.txt \
  --cc_snapshot_id 2023-50 \
  --langs "hi" \
  --inputs_per_process 5 \
  --minhash_num_permutations 128 \
  --minhash_ngram_size 13
```
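As a rough illustration of what a MinHash signature is (a self-contained stdlib sketch, not the pipeline's actual implementation): each document is reduced to 128 minima of seeded hashes over its character 13-grams, matching the `--minhash_num_permutations` and `--minhash_ngram_size` flags above.

```python
import hashlib

NUM_PERM = 128  # matches --minhash_num_permutations
NGRAM = 13      # matches --minhash_ngram_size

def minhash(text, num_perm=NUM_PERM, ngram=NGRAM):
    """Return a MinHash signature: for each 'permutation' (seeded hash),
    the minimum hash value over the document's character n-grams."""
    grams = {text[i:i + ngram] for i in range(len(text) - ngram + 1)}
    sig = []
    for seed in range(num_perm):
        salt = seed.to_bytes(16, "little")  # blake2b salt simulates a permutation
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(g.encode(), digest_size=8, salt=salt).digest(),
                "little")
            for g in grams))
    return sig

def est_jaccard(sig_a, sig_b):
    """Fraction of matching signature slots estimates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```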
#### Step 3: Apply the Bloom filter
```sh
python3 app/src/bloomfilter.py \
  --listings /path/to/listings/file.txt \
  --input_base_uri "file://path/to/ccnet/data" \
  --output_dir "/path/to/output" \
  --parallel_readers 32 \
  --batch_size 10
```
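Conceptually, the Bloom-filter pass drops documents whose content has already been seen. A minimal stdlib sketch of that idea follows; the bit-array size, hash count, and class names are illustrative assumptions, not `bloomfilter.py`'s actual parameters.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter over a bit array stored in one Python int."""
    def __init__(self, num_bits=1 << 20, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive num_hashes bit positions via independently salted hashes.
        for i in range(self.num_hashes):
            h = hashlib.blake2b(item.encode(), digest_size=8,
                                salt=i.to_bytes(16, "little")).digest()
            yield int.from_bytes(h, "little") % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def dedup(docs):
    """Keep only first occurrences; false positives may rarely drop a unique doc."""
    bf, kept = BloomFilter(), []
    for doc in docs:
        if doc not in bf:
            bf.add(doc)
            kept.append(doc)
    return kept
```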
#### Step 4: Run LSH
```sh
python3 app/src/run_lsh.py \
  --listings "/path/to/minhash-signature/listings/file.txt" \
  --input_base_uri "file:///path/to/minhash-signature/files" \
  --output_dir "/path/to/output" \
  --similarity "0.8" \
  --num_perm "128"
```
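Locality-sensitive hashing over MinHash signatures is commonly implemented by banding: the signature is split into bands, and documents sharing any identical band become candidate near-duplicates. The sketch below shows that banding step with stdlib code only; the band count is chosen for illustration and `run_lsh.py`'s internals may differ.

```python
from collections import defaultdict

def lsh_candidate_pairs(signatures, num_bands=16):
    """Bucket each document by its signature bands; documents sharing
    any bucket become candidate near-duplicate pairs.

    signatures: dict mapping doc_id -> list of ints (e.g. 128 slots).
    """
    rows = len(next(iter(signatures.values()))) // num_bands
    buckets = defaultdict(set)
    for doc_id, sig in signatures.items():
        for b in range(num_bands):
            band = tuple(sig[b * rows:(b + 1) * rows])
            buckets[(b, band)].add(doc_id)
    pairs = set()
    for ids in buckets.values():
        ids = sorted(ids)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                pairs.add((ids[i], ids[j]))
    return pairs
```

With 128 permutations and 16 bands of 8 rows, two documents with Jaccard similarity s collide in at least one band with probability 1 - (1 - s^8)^16, which rises steeply near the 0.8 threshold used above.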