# cc-multilingual
Downloading and deduplicating Indic multilingual data from CommonCrawl
### Installation for cc_net
```sh
cd cc_net/
make install
```
### Choose a snapshot (`snapshot-id`, e.g. `2023-40`)
#### Step 1: Edit the `config/myconfig.json` file
```json
{
  "dump": "snapshot-id",
  "num_shards": 1600,
  "lang_whitelist": ["as","bn","gu","kn","hi","ml","mr","ne","or","pb","sa","sd","ta","ur","te","ks","sat","mai","mni","kok","doi","brx"],
  "mine_num_processes": 16,
  "pipeline": [
    "lid",
    "keep_lang",
    "pp_bucket",
    "split_by_lang"
  ],
  "target_size": "100M",
  "output_dir": "data",
  "mined_dir": "mined",
  "cache_dir": "wet_cache"
}
```
#### Step 2: (Optional) Download data into cache
```sh
# wet.paths.gz lists every WET file in the snapshot, e.g.
# https://data.commoncrawl.org/crawl-data/CC-MAIN-2023-40/wet.paths.gz
wget wet_file_path
python3 script.py wet.paths.gz 90 wet_cache/2023-40/
```
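`script.py` is the repo's download helper; its arguments suggest a paths file, a worker count, and a cache directory. A minimal sketch of such a prefetcher, assuming the public Common Crawl mirror (an illustration, not the repo's actual script):

```python
# Hypothetical stand-in for script.py: prefetch WET files into the cache.
# Assumes the public Common Crawl mirror and the usual wet.paths.gz layout.
import gzip
import sys
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

BASE = "https://data.commoncrawl.org/"

def fetch(rel_path: str, out_dir: Path) -> None:
    """Download one WET segment unless it is already cached."""
    dest = out_dir / Path(rel_path).name
    if not dest.exists():
        urllib.request.urlretrieve(BASE + rel_path, dest)

def main(paths_gz: str, workers: int, out_dir: str) -> None:
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    with gzip.open(paths_gz, "rt") as f:
        rel_paths = [line.strip() for line in f if line.strip()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda p: fetch(p, out), rel_paths))

if __name__ == "__main__":
    main(sys.argv[1], int(sys.argv[2]), sys.argv[3])
```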
#### Step 3: Run the pipeline
```sh
python3 -m cc_net --config config/myconfig.json
```
## Deduplication
```sh
pip install -r app/requirements.txt
```
#### Step 1: Add the list of files downloaded from cc_net to `listings/file.txt`, in the format `lang_shard.json.gz`
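A small helper along these lines could generate that listing (the directory below is an assumption about your local layout):

```python
# Sketch: write listings/file.txt from the cc_net output directory.
# data_dir is an assumption; point it at your actual mined shards.
from pathlib import Path

data_dir = Path("data/mined/2023-40")
listing = Path("listings/file.txt")
listing.parent.mkdir(parents=True, exist_ok=True)

with listing.open("w") as out:
    for shard in sorted(data_dir.glob("*.json.gz")):
        out.write(shard.name + "\n")  # e.g. hi_0000.json.gz
```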
#### Step 2: Computing MinHash signatures
```sh
python3 app/src/pipeline.py \
  --input_base_uri "file://path/to/ccnet/data" \
  --output_base_uri "/path/to/output" \
  --artifacts_dir "file:///path/to/empty/artifacts" \
  --input /path/to/listings/file.txt \
  --cc_snapshot_id 2023-50 \
  --langs "hi" \
  --inputs_per_process 5 \
  --minhash_num_permutations 128 \
  --minhash_ngram_size 13
```
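For intuition, here is a minimal MinHash sketch over word 13-grams using the `datasketch` library; it mirrors the `--minhash_num_permutations 128` and `--minhash_ngram_size 13` flags but is not the pipeline's actual code:

```python
# Illustration of MinHash signatures (not app/src/pipeline.py itself).
from datasketch import MinHash  # pip install datasketch

def signature(text: str, num_perm: int = 128, ngram: int = 13) -> MinHash:
    """Hash each word 13-gram of a document into a 128-permutation MinHash."""
    words = text.split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(words) - ngram + 1, 1)):
        m.update(" ".join(words[i:i + ngram]).encode("utf-8"))
    return m

a = signature("one long document body " * 20)
b = signature("one long document body " * 20 + "with a small edit")
print(a.jaccard(b))  # estimated Jaccard similarity between the two texts
```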
#### Step 3: Applying the Bloom filter
```sh
python3 app/src/bloomfilter.py \
  --listings /path/to/listings/file.txt \
  --input_base_uri "file://path/to/ccnet/data" \
  --output_dir "/path/to/output" \
  --parallel_readers 32 \
  --batch_size 10
```
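Conceptually, the Bloom-filter pass drops exact duplicates within a fixed memory budget. A toy version of the idea (sizes and hash counts are illustrative assumptions; the real logic lives in `app/src/bloomfilter.py`):

```python
# Toy Bloom-filter dedup to illustrate the idea, not the repo's code.
import hashlib

class BloomFilter:
    """Tiny Bloom filter over a bytearray; parameters are illustrative."""
    def __init__(self, size_bits: int = 1 << 24, num_hashes: int = 7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits[p >> 3] |= 1 << (p & 7)

    def __contains__(self, item: str) -> bool:
        return all(self.bits[p >> 3] & (1 << (p & 7))
                   for p in self._positions(item))

bf = BloomFilter()
kept = []
for doc in ["first document", "second document", "first document"]:
    if doc not in bf:   # unseen -> keep it and record it in the filter
        bf.add(doc)
        kept.append(doc)
print(kept)  # exact duplicates dropped; false positives are possible
```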
#### Step 4: Running LSH
```sh
python3 app/src/run_lsh.py \
  --listings "/path/to/minhash-signature/listings/file.txt" \
  --input_base_uri "file:///path/to/minhash-signature/files" \
  --output_dir "/path/to/output" \
  --similarity "0.8" \
  --num_perm "128"
```
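Locality-sensitive hashing then groups near-duplicates whose estimated Jaccard similarity exceeds the threshold. A minimal sketch with `datasketch`'s `MinHashLSH`, mirroring `--similarity 0.8` and `--num_perm 128` (illustrative only; the real logic lives in `app/src/run_lsh.py`):

```python
# Minimal LSH near-duplicate grouping with datasketch (illustration only).
from datasketch import MinHash, MinHashLSH  # pip install datasketch

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for word in text.split():
        m.update(word.encode("utf-8"))
    return m

docs = {
    "d1": "common crawl provides petabytes of freely available web data every month",
    "d2": "common crawl provides petabytes of freely available web data each month",
    "d3": "an entirely different sentence about indic language corpora",
}

# threshold mirrors --similarity; num_perm mirrors --num_perm
lsh = MinHashLSH(threshold=0.8, num_perm=128)
for key, text in docs.items():
    lsh.insert(key, minhash(text))

print(lsh.query(minhash(docs["d1"])))  # very likely ['d1', 'd2']
```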