- Grove MoE: Towards Efficient and Superior MoE LLMs with Adjugate Experts
  Paper • 2508.07785 • Published • 23
- MoBE: Mixture-of-Basis-Experts for Compressing MoE-based LLMs
  Paper • 2508.05257 • Published • 10
- SmallThinker: A Family of Efficient Large Language Models Natively Trained for Local Deployment
  Paper • 2507.20984 • Published • 54
- MiniCPM4: Ultra-Efficient LLMs on End Devices
  Paper • 2506.07900 • Published • 90
Emmanuel Sugut (Sugutt)

AI & ML interests: Reinforcement learning, Transformer models
Recent Activity
- liked a Space (aisheets/sheets) 2 days ago
- liked a model (openbmb/Ultra-FineWeb-classifier) 5 days ago
- updated a collection (MoE) 5 days ago