ZihanWang99's Collections
MOE

updated Feb 19, 2024

  • DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models

    Paper • 2401.06066 • Published Jan 11, 2024 • 56

  • Mixtral of Experts

    Paper • 2401.04088 • Published Jan 8, 2024 • 160

  • Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM

    Paper • 2401.02994 • Published Jan 4, 2024 • 52

  • LLM Augmented LLMs: Expanding Capabilities through Composition

    Paper • 2401.02412 • Published Jan 4, 2024 • 39

  • Scaling Laws for Fine-Grained Mixture of Experts

    Paper • 2402.07871 • Published Feb 12, 2024 • 14

  • OpenMoE: An Early Effort on Open Mixture-of-Experts Language Models

    Paper • 2402.01739 • Published Jan 29, 2024 • 29

  • MoE-LLaVA: Mixture of Experts for Large Vision-Language Models

    Paper • 2401.15947 • Published Jan 29, 2024 • 54