Neural Network Compression & Quantization

A collection by mfuntowicz

updated Sep 22, 2023

Tracks papers and links about neural network compression and quantization techniques.


  • LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

    Paper • 2208.07339 • Published Aug 15, 2022 • 5

  • GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers

    Paper • 2210.17323 • Published Oct 31, 2022 • 8

  • SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

    Paper • 2211.10438 • Published Nov 18, 2022 • 6

  • AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

    Paper • 2306.00978 • Published Jun 1, 2023 • 9
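A common thread in these papers is low-bit integer quantization of model weights. As a minimal illustrative sketch (not taken from any of the papers above, though absmax scaling is the baseline scheme discussed in LLM.int8()), the following shows symmetric round-to-nearest int8 quantization of a weight tensor using NumPy:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor int8 quantization (absmax scaling)."""
    # Scale so the largest-magnitude weight maps to 127.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)

# Round-to-nearest keeps the per-weight error within half a quantization step.
print(np.max(np.abs(w - w_hat)))
```

Methods like GPTQ, SmoothQuant, and AWQ improve on this baseline by choosing scales per channel or group and by using activation statistics to reduce the quantization error on the weights that matter most.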