ConPET: Continual Parameter-Efficient Tuning for Large Language Models • arXiv:2309.14763 • Published Sep 26, 2023
ReLU² Wins: Discovering Efficient Activation Functions for Sparse LLMs • arXiv:2402.03804 • Published Feb 6, 2024
ProSparse: Introducing and Enhancing Intrinsic Activation Sparsity within Large Language Models • arXiv:2402.13516 • Published Feb 21, 2024
BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity • arXiv:2507.08771 • Published Jul 2025
Sparsing Law: Towards Large Language Models with Greater Activation Sparsity • arXiv:2411.02335 • Published Nov 4, 2024
Configurable Foundation Models: Building LLMs from a Modular Perspective • arXiv:2409.02877 • Published Sep 4, 2024