- La RoSA: Enhancing LLM Efficiency via Layerwise Rotated Sparse Activation (arXiv:2507.01299, published Jul 2, 2025)
- Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning (arXiv:2505.17813, published May 23, 2025)
- Mixture-of-Instructions: Comprehensive Alignment of a Large Language Model through the Mixture of Diverse System Prompting Instructions (arXiv:2404.18410, published Apr 29, 2024)