Dataset Card for wa-hls4ml Benchmark Dataset
The wa-hls4ml projects dataset comprises the Vivado/Vitis projects of neural networks converted into HLS code via hls4ml. Projects are complete and include all logs, HLS code, VHDL code, intermediate representations, and source Keras models.
This is a companion dataset to the wa-hls4ml dataset. There is a reference CSV for each model type that lists each individual model name, the artifacts file for that model, and the batch archive file that contains the project archive for that model.
PLEASE NOTE: The dataset is currently incomplete, and is in the process of being cleaned and uploaded.
Dataset Details
We introduce wa-hls4ml[^1]: a dataset unprecedented in scale and features, together with a benchmark for the common evaluation of resource usage and latency estimators.
The open dataset is unprecedented in terms of its size, with over 680,000 fully synthesized dataflow models.
The goal is to continue to grow and extend the dataset over time.
We include all steps of the synthesis chain from ML model to HLS representation to register-transfer level (RTL) and save the full logs.
This will enable a much broader set of applications beyond those in this paper.
The benchmark standardizes evaluation of the performance of resource usage and latency estimators across a suite of metrics, such as the coefficient of determination (R^2), symmetric mean absolute percentage error (SMAPE), and root mean square error (RMSE), and provides sample models, both synthetic and from scientific applications, to support and encourage the continued development of better surrogate models.
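For reference, a minimal sketch of these metrics in Python is shown below; the SMAPE convention used here (absolute errors scaled by the average magnitude of target and prediction) is one common definition and may differ from the exact formulation used in the benchmark.

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root mean square error
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def smape(y_true, y_pred):
    # Symmetric mean absolute percentage error, in percent (one common convention)
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)

def r2(y_true, y_pred):
    # Coefficient of determination
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```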
[^1]: Named after Wario and Waluigi, who are doppelgängers of Mario and Luigi, respectively, in the Nintendo Super Mario platform game series.
Dataset Description
The dataset has two primary components, each designed to test different aspects of a surrogate model's performance.
The first part is based on synthetic neural networks generated with various layer types, micro-architectures, and precisions.
This synthetic dataset lets us systematically explore the FPGA resources and latencies as we vary different model parameters.
The second part of the benchmark targets models from exemplar realistic scientific applications, requiring real-time processing at the edge, near the data sources.
Models with real-time constraints constitute a primary use case for ML-to-FPGA pipelines like hls4ml.
This part tests the ability of the surrogate model to extrapolate its predictions to new configurations and architectures beyond the training set, assessing the model's robustness and performance for real applications.
Exemplar Model Descriptions
- Jet: A fully connected neural network that classifies simulated particle jets originating from one of five particle classes in high-energy physics experiments.
- Top Quarks: A binary classifier for top quark jets, helping probe fundamental particles and their interactions.
- Anomaly: An autoencoder trained on audio data to reproduce the input spectrogram, whose loss value differentiates between normal and abnormal signals.
- BiPC: An encoder that transforms high-resolution images, producing sparse codes for further compression.
- CookieBox: Dedicated to real-time data acquisition for the CookieBox system, designed for advanced experimental setups requiring rapid handling of large data volumes generated by high-speed detectors.
- AutoMLP: A fully connected network from the AutoMLP framework, focusing on accelerating MLPs on FPGAs, providing significant improvements in computational performance and energy efficiency.
- Particle Tracking: Tracks charged particles in real-time as they traverse silicon detectors in large-scale particle physics experiments.
Exemplar Model Architectures
Model | Size (parameters) | Input | Architecture |
---|---|---|---|
Jet | 2,821 | 16 | →[ReLU]32 →[ReLU]32 →[ReLU]32 →[Softmax]5 |
Top Quarks | 385 | 10 | →[ReLU]32 →[Sigmoid]1 |
Anomaly | 2,864 | 128 | →[ReLU]8 →[ReLU]4 →[ReLU]128 →[ReLU]4 →[Softmax]128 |
BiPC | 7,776 | 36 | →[ReLU]36 →[ReLU]36 →[ReLU]36 →[ReLU]36 →[ReLU]36 |
CookieBox | 3,433 | 512 | →[ReLU]4 →[ReLU]32 →[ReLU]32 →[Softmax]5 |
AutoMLP | 534 | 7 | →[ReLU]12 →[ReLU]16 →[ReLU]12 →[Softmax]2 |
Particle Tracking | 2,691 | 14 | →[ReLU]32 →[ReLU]32 →[ReLU]32 →[Softmax]3 |
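As an illustration, the Jet architecture from the table above can be reconstructed in Keras as follows (a minimal sketch, not the authors' original training code); its parameter count matches the 2,821 listed in the table.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Reconstruction of the Jet classifier from the architecture table:
# 16 inputs -> three ReLU layers of 32 units -> 5-class softmax output.
jet_model = keras.Sequential([
    keras.Input(shape=(16,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(5, activation="softmax"),
])

# (16*32 + 32) + (32*32 + 32) + (32*32 + 32) + (32*5 + 5) = 2,821 parameters
print(jet_model.count_params())  # 2821
```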
- Curated by: Fast Machine Learning Lab
- Funded by: See "Acknowledgements" in the paper for full funding details
- Language(s) (NLP): English
- License: cc-by-nc-4.0
Dataset Sources
The dataset consists of data generated by the authors using the following methods:
Generation of Synthetic Data
The train, validation, and test sets were created by first generating models of varying architectures in the Keras and QKeras Python libraries, varying their hyperparameters.
The updated rule4ml dataset follows the same generation method and hyperparameter ranges described in prior work, while adding initiation interval (II) information and logic synthesis results to the reports.
For the remaining subsets of the data, the two-layer and three-layer fully-connected models were generated using a grid search over the parameter ranges listed below, whereas larger fully-connected models and convolutional models (one- and two-dimensional) were randomly generated; the convolutional models also contain dense, flatten, and pooling layers.
The weight and bias precision was implemented in HLS as the datatype `ap_fixed<X,1>`, where `X` is the specified precision, i.e., the total number of bits allocated to the weight and bias values, with one bit reserved for the integer portion of the value.
These models were then converted to HLS using hls4ml and synthesized through AMD Vitis versions 2023.2 and 2024.2, targeting the AMD Xilinx Alveo U250 FPGA board.
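A minimal sketch of how such a conversion might be configured with the hls4ml Python API is shown below, assuming a standard hls4ml workflow; the configuration keys and the Alveo U250 part number are assumptions and may vary between hls4ml versions.

```python
import hls4ml

# `model` is assumed to be one of the generated Keras/QKeras models.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")

# Weight/bias precision ap_fixed<X,1>: X total bits, one reserved for the integer part.
config["Model"]["Precision"] = "ap_fixed<8,1>"
config["Model"]["ReuseFactor"] = 128       # hls4ml target reuse factor
config["Model"]["Strategy"] = "Resource"   # or "Latency"

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    io_type="io_parallel",                 # io_stream for the convolutional models
    backend="Vitis",
    part="xcu250-figd2104-2L-e",           # assumed Alveo U250 part number
    output_dir="wa_hls4ml_project",
)

# C synthesis plus logic synthesis, as collected for each sample in the dataset
hls_model.build(csim=False, synth=True, vsynth=True)
```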
The model sets have the following parameter ranges:
- Number of layers: 2–7 for fully-connected models; 3–7 for convolutional models.
- Activation functions: Linear for most 2–3 layer fully-connected models; ReLU, tanh, and sigmoid for all other fully-connected models and convolutional models.
- Number of features/neurons: 8–128 (step size: 8 for 2–3 layer) for fully-connected models; 32–128 for convolutional models with 8–64 filters.
- Weight and bias bit precision: 2–16 bits (step size: 2) for 2–3 layer fully-connected models; 4–16 bits (step size: powers of 2) for 3–7 layer fully-connected and convolutional models.
- hls4ml target reuse factor: 1–4093 for fully-connected models; 8192–32795 for convolutional models.
- hls4ml implementation strategy: Resource strategy, which controls the degree of parallelism by explicitly specifying the number of MAC operations performed in parallel per clock cycle, is used for most fully-connected models and all convolutional models; Latency strategy, in which the computation is fully unrolled, is used for some 3–7 layer fully-connected models.
- hls4ml I/O type: The io_parallel setting, which directly wires the output of one layer to the input of the next layer, is used for all fully-connected models, and the io_stream setting, which uses FIFO buffers between layers, is used for all convolutional models.
Exemplar Model Synthesis Parameters
The exemplar models were synthesized with the following parameters:
Hyperparameter | Values |
---|---|
Precision | `ap_fixed<2,1>`, `ap_fixed<8,3>`, `ap_fixed<16,6>` |
Strategy | `Latency`, `Resource` |
Target reuse factor | 1, 128, 1024 |
Target board | Alveo U200, Alveo U250 |
Target clock | 5 ns, 10 ns |
Vivado version | 2019.1, 2020.1 |
The synthesis was repeated multiple times, varying the hls4ml reuse factor, a tunable setting that proportionally limits the number of multiplication operations used.
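As a rough illustration of this trade-off (an approximation, not the exact resource model used by hls4ml), the number of multipliers instantiated for a dense layer scales inversely with the reuse factor:

```python
# Illustrative approximation: multipliers used in parallel for one dense layer.
n_in, n_out = 32, 32
n_mult = n_in * n_out                      # 1,024 multiplications per inference

for reuse_factor in (1, 128, 1024):
    parallel_mults = n_mult // reuse_factor
    print(f"reuse factor {reuse_factor:5d} -> ~{parallel_mults} parallel multipliers")
# A reuse factor of 1 performs all multiplications in parallel (fastest, most resources);
# larger reuse factors reuse each multiplier more often, trading latency for resources.
```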
The hls4ml conversion, HLS synthesis, and logic synthesis of the train and test sets were all performed in parallel on the National Research Platform Kubernetes Hypercluster and the Texas A&M ACES HPRC Cluster.
On the National Research Platform, synthesis was run inside containers with a guest OS of Ubuntu 20.04.4 LTS; the containers were slightly modified versions of the xilinx-docker v2023.2 "user" images, with 3 virtual CPU cores and 16 GB of RAM per pod and all AMD tools mounted through a Ceph-based persistent volume.
Jobs run on the Texas A&M ACES HPRC Cluster were run using Vitis 2024.2, each with 2 virtual CPU cores and 32 GB of RAM.
The resulting projects, reports, logs, and a JSON file containing the resource/latency usage and estimates of the C and logic synthesis were collected for each sample in the dataset.
The data, excluding the projects and logs, were then further processed into a collection of JSON files, distributed alongside this paper and described below.
- Repository: fastmachinelearning/wa-hls4ml-paper
- Paper: In review
Uses
This dataset is intended to be used to train or refine LLMs to better generate HLS and VHDL code, and to improve understanding of the general C- and logic-synthesis processes in order to better assist with debugging and question answering for FPGA tooling and hls4ml.
Direct Use
This dataset was generated using the tool hls4ml, and should be used to train LLMs and/or other models for HLS/VHDL code generation, along with improving question answering and understanding of (currently) Vivado/Vitis and hls4ml workflows.
Out-of-Scope Use
As this dataset is generated using the hls4ml and Vivado/Vitis tools, it should not be used to train LLMs and/or other models for other tools, as results and implementation details may vary across those tools compared to hls4ml and Vivado/Vitis.
Dataset Structure
Within each subset, excluding the exemplar test set, the data is grouped as follows.
- 2_20 (rule4ml): The updated rule4ml dataset, containing fully-connected neural networks that were randomly generated with layer counts between 2 and 20 layers, using hls4ml resource and latency strategies.
- 2_layer: A subset containing 2-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.
- 3_layer: A subset containing 3-layer deep fully-connected neural networks generated via a grid search using hls4ml resource and io_parallel strategies.
- conv1d: A subset containing 3–7 layer deep 1-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.
- conv2d: A subset containing 3–7 layer deep 2-dimensional convolutional neural networks that were randomly generated and use hls4ml resource and io_stream strategies.
- latency: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml latency and io_parallel strategies.
- resource: A subset containing 3–7 layer deep fully-connected neural networks that were randomly generated and use hls4ml resource and io_parallel strategies.
Structure of the CSV Index files
There is one CSV index file for each model type split. Each file has 3 fields:
- Model Name: The name of the model that you can use to reference the corresponding JSON file in the wa-hls4ml dataset. Note: you will need to split the string at the last `_` character to find the corresponding JSON file. The string to the left of the `_` is the source model name, and the string to the right is the target reuse factor for that specific project.
- Artifacts File: The name of the specific artifacts file that contains the project for the specified model.
- Archive Name: The name of the archive that contains the specific artifacts file for the specified model.
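For example, a lookup in one of these index files might look like the following sketch; the file and column names used here are hypothetical, so check the header of the CSV you download.

```python
import pandas as pd

# Hypothetical file and column names; check the actual header of the index CSV.
index = pd.read_csv("conv2d_index.csv")
row = index.iloc[0]

model_name = row["model_name"]             # e.g. "<source_model>_<reuse_factor>"
artifacts_file = row["artifacts_file"]     # archive containing this project
archive_name = row["archive_name"]         # batch tar.gz containing the artifacts file

# Split at the last underscore: left is the source model name,
# right is the target reuse factor for this specific project.
source_model, reuse_factor = model_name.rsplit("_", 1)
print(source_model, reuse_factor, artifacts_file, archive_name)
```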
Structure of the project archives
Due to file size/count limitations, the individual project archives are split into batches and placed into one of a number of tar.gz files. If you are looking for a specific project file, please refer to the index CSV file as described above.
Each project archive contains the complete Vivado/Vitis project in its original structure, including the resulting HLS and VHDL code, logs, reports, intermediate representations, and the source Keras model file.
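A minimal sketch of unpacking a single project is shown below, assuming the artifacts file referenced in the index is itself a tar.gz archive nested inside the batch archive; the file names are hypothetical placeholders.

```python
import tarfile

# Hypothetical names taken from the index CSV lookup above.
archive_name = "conv2d_batch_000.tar.gz"   # batch archive listed under "Archive Name"
artifacts_file = "model_x_128.tar.gz"      # per-model archive listed under "Artifacts File"

# Extract the per-model artifacts file from the batch archive...
with tarfile.open(archive_name, "r:gz") as batch:
    batch.extract(artifacts_file, path="artifacts")

# ...then unpack the project itself (HLS/VHDL code, logs, reports, source Keras model).
with tarfile.open(f"artifacts/{artifacts_file}", "r:gz") as project:
    project.extractall(path="projects/model_x_128")
```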
Curation Rationale
With the introduction of ML into FPGA toolchains, e.g., for resource and latency prediction or code generation, there is a significant need for large datasets to support and train these tools.
We found that existing datasets were insufficient for these needs, and therefore sought to build a dataset and a highly scalable data generation framework that is useful for a wide variety of research surrounding ML on FPGAs.
This dataset serves as one of the few openly accessible, large-scale collections of synthesized neural networks available for ML research.
Exemplar Realistic Models
The exemplar models utilized in this study include several key architectures, each tailored for specific ML tasks and targeting scientific applications with low-latency constraints.
Source Data
The data was generated from randomly generated neural networks and specifically selected exemplar models, converted into HLS code via hls4ml. The latency values were collected after performing C synthesis through Vivado/Vitis HLS on the resulting HLS code, and the resource values were collected after performing logic synthesis through Vivado/Vitis on the resulting HDL code. The projects were then stored in tar.gz files and distributed in this dataset.
Who are the source data producers?
Benjamin Hawks, Fermi National Accelerator Laboratory, USA
Hamza Ezzaoui Rahali, University of Sherbrooke, Canada
Mohammad Mehdi Rahimifar, University of Sherbrooke, Canada
Personal and Sensitive Information
This data contains no personally identifiable or sensitive information except for the names/usernames of the authors in some file paths.
Bias, Risks, and Limitations
In its initial form, the majority of this dataset consists of very small (2–3 layer) dense neural networks without activations. This should be considered when training a model on it, and appropriate measures should be taken to weight the data at training time. We intend to continuously update this dataset, addressing this imbalance over time as more data is generated.
Recommendations
Appropriate measures should be taken to weight the data to account for the dataset imbalance at training time.
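One possible approach (a minimal sketch, not a prescribed method) is to weight samples inversely to the frequency of the subset they come from:

```python
import numpy as np

# `subset_labels` records which subset each training sample came from,
# e.g. "2_layer", "3_layer", "conv1d", "conv2d", "latency", "resource".
subset_labels = np.array(["2_layer", "2_layer", "3_layer", "conv2d", "2_layer"])

unique, counts = np.unique(subset_labels, return_counts=True)
freq = dict(zip(unique, counts / counts.sum()))

# Over-represented subsets (e.g. small dense models) receive proportionally lower weight.
sample_weights = np.array([1.0 / freq[s] for s in subset_labels])
sample_weights /= sample_weights.mean()    # normalize to mean 1
```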
Citation
Paper currently in review.
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Dataset Card Authors
Benjamin Hawks, Fermi National Accelerator Laboratory, USA
Hamza Ezzaoui Rahali, University of Sherbrooke, Canada
Mohammad Mehdi Rahimifar, University of Sherbrooke, Canada
Dataset Card Contact