update README.md

Signed-off-by: Lei Hsiung <[email protected]>

README.md CHANGED
````diff
@@ -1,25 +1,29 @@
 ---
 title: NeuralFuse
-emoji:
+emoji: ⚡
 colorFrom: yellow
 colorTo: indigo
 sdk: static
 pinned: false
 ---
 
-#
+# NeuralFuse
 
-
+Official project page of the paper "[NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes](https://arxiv.org/abs/2306.16869)."
 
-
-```
-@article{park2021nerfies
-  author = {Park, Keunhong and Sinha, Utkarsh and Barron, Jonathan T. and Bouaziz, Sofien and Goldman, Dan B and Seitz, Steven M. and Martin-Brualla, Ricardo},
-  title = {Nerfies: Deformable Neural Radiance Fields},
-  journal = {ICCV},
-  year = {2021},
-}
-```
+
 
-
-
+Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains problematically high. An effective strategy for reducing such consumption is supply-voltage reduction, but if done too aggressively, it can lead to accuracy degradation. This is due to random bit-flips in static random access memory (SRAM), where model parameters are stored. To address this challenge, we have developed NeuralFuse, a novel add-on module that handles the energy-accuracy tradeoff in low-voltage regimes by learning input transformations and using them to generate error-resistant data representations, thereby protecting DNN accuracy in both nominal and low-voltage scenarios. As well as being easy to implement, NeuralFuse can be readily applied to DNNs with limited access, such as cloud-based APIs that are accessed remotely or non-configurable hardware. Our experimental results demonstrate that, at a 1% bit-error rate, NeuralFuse can reduce SRAM access energy by up to 24% while recovering accuracy by up to 57%. To the best of our knowledge, this is the first approach to addressing low-voltage-induced bit errors that requires no model retraining.
+
+
+## Citation
+If you find this helpful for your research, please cite our paper as follows:
+
+@article{sun2024neuralfuse,
+  title={{NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes}},
+  author={Hao-Lun Sun and Lei Hsiung and Nandhini Chandramoorthy and Pin-Yu Chen and Tsung-Yi Ho},
+  booktitle = {Advances in Neural Information Processing Systems},
+  publisher = {Curran Associates, Inc.},
+  volume = {37},
+  year = {2024}
+}
````
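As a rough illustration of the failure mode the abstract describes — random bit-flips in the SRAM holding model parameters at a given bit-error rate — here is a minimal NumPy sketch. The function name `flip_bits` and the uniform independent-flip error model are illustrative assumptions for this page, not code or the exact error model from the NeuralFuse paper.

```python
import numpy as np

def flip_bits(weights, ber, seed=None):
    """Flip each bit of a float32 weight buffer independently with
    probability `ber`, mimicking low-voltage SRAM read errors.

    Illustrative error model only; not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    raw = weights.astype(np.float32).view(np.uint8)  # raw bytes of the weights
    # Build a per-byte XOR mask: each of the 8 bits flips with probability `ber`.
    mask = np.packbits(rng.random((raw.size, 8)) < ber, axis=1).ravel()
    return (raw ^ mask).view(np.float32).reshape(weights.shape)

w = np.ones(10_000, dtype=np.float32)
w_noisy = flip_bits(w, ber=0.01, seed=0)
# With 32 bits per value, P(value corrupted) = 1 - 0.99**32 ≈ 0.275.
print(f"fraction of corrupted weights: {np.mean(w_noisy != w):.3f}")
```

Injecting such a mask into a trained model's parameters before inference is one way to estimate accuracy under a chosen bit-error rate, which is the regime an add-on module like NeuralFuse aims to protect.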