---
license: cc-by-4.0
tags:
- math
- cryptography
pretty_name: Datasets for Learning the Learning with Errors Problem
size_categories:
- 100M<n<1B
---

# TAPAS: Datasets for Learning the Learning with Errors Problem

## About this Data
AI-powered attacks on Learning with Errors (LWE), an important hard math problem in post-quantum cryptography, rival or outperform "classical" attacks under certain parameter settings. Despite the promise of this approach, a dearth of accessible data limits AI practitioners' ability to study and improve these attacks. Creating LWE data for AI model training is time- and compute-intensive and requires significant domain expertise. To fill this gap and accelerate AI research on LWE attacks, we propose the TAPAS datasets, a **t**oolkit for **a**nalysis of **p**ost-quantum cryptography using **A**I **s**ystems. These datasets cover several LWE settings and can be used off-the-shelf by AI practitioners to prototype new approaches to cracking LWE.

The table below gives an overview of the datasets provided in this work:
| n    | log q | omega | rho | # samples |
|--------|-----------|----------|--------|------------|
| 256  | 20      | 10       | 0.4284 | 400M       |
| 512  | 12      | 10       | 0.9036 | 40M        |
| 512  | 28      | 10       | 0.6740 | 40M        |
| 512  | 41      | 10       | 0.3992 | 40M        |
| 1024 | 26      | 10       | 0.8600 | 40M        |

## Usage

These datasets are intended to be used in conjunction with the code at: https://github.com/facebookresearch/LWE-benchmarking

Download the .tar.gz files and unzip them into a directory with enough storage. For datasets split into multiple chunks, concatenate the unzipped files into a single data.prefix file, as sketched below.
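Here is a minimal Python sketch of the extract-and-concatenate step. It assumes the archives sit in the working directory and that each one unpacks to a `chunk_*.prefix` piece; the extracted filenames are hypothetical, so inspect one archive's contents first and adjust the glob patterns to match.

```python
import glob
import shutil
import tarfile

# Extract every downloaded archive into the current directory.
for archive in sorted(glob.glob("chunk_*.tar.gz")):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(path=".")

# Concatenate the extracted pieces into a single data.prefix file,
# preserving alphabetical chunk order. The "chunk_*.prefix" pattern
# is an assumption -- match it to the actual extracted filenames.
with open("data.prefix", "wb") as out:
    for piece in sorted(glob.glob("chunk_*.prefix")):
        with open(piece, "rb") as src:
            shutil.copyfileobj(src, out)
```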

Then, follow the instructions in this [README](https://github.com/facebookresearch/LWE-benchmarking/blob/main/README.md) to generate the full sets of LWE pairs and train AI models on this data.


Due to storage constraints, only 40M samples of the n=256 data are provided here on Hugging Face. The remaining 360M samples are hosted at http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/ (append filenames chunk_ab.tar.gz through chunk_aj.tar.gz to download).

Here are the exact links to each remaining chunk of the n=256 data (each contains 40M examples); a download sketch follows the list:
- [chunk_ab](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ab.tar.gz)
- [chunk_ac](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ac.tar.gz)
- [chunk_ad](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ad.tar.gz)
- [chunk_ae](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ae.tar.gz)
- [chunk_af](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_af.tar.gz)
- [chunk_ag](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ag.tar.gz)
- [chunk_ah](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ah.tar.gz)
- [chunk_ai](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_ai.tar.gz)
- [chunk_aj](http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/chunk_aj.tar.gz)
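Rather than clicking each link, the nine chunks can also be fetched in a loop. A minimal Python sketch (each archive is large, so make sure the destination has enough free space):

```python
import string
import urllib.request

BASE = "http://dl.fbaipublicfiles.com/large_objects/lwe-benchmarking/n256_logq20/"

# Fetch chunk_ab.tar.gz through chunk_aj.tar.gz, as listed above.
for letter in string.ascii_lowercase[1:10]:  # 'b' through 'j'
    name = f"chunk_a{letter}.tar.gz"
    print(f"Downloading {name} ...")
    urllib.request.urlretrieve(BASE + name, filename=name)
```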