LoRA-TMLR-2024's Collections
  • Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)
  • Continued Pretraining - Code (StarCoder-Python)
  • Instruction Finetuning - Math (MetaMathQA)
  • Continued Pretraining - Math (OpenWebMath)

Instruction Finetuning - Code (Magicoder-Evol-Instruct-110K)

updated Sep 26, 2024

Full finetuning and LoRA adapters for Llama-2-7B finetuned on Magicoder-Evol-Instruct-110K

  • LoRA-TMLR-2024/magicoder-lora-rank-64-alpha-128 (Updated Sep 27, 2024 • 14)
  • LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32 (Updated Oct 16, 2024 • 208)
  • LoRA-TMLR-2024/magicoder-lora-rank-256-alpha-512 (Updated Sep 27, 2024 • 9)
  • LoRA-TMLR-2024/magicoder-lora-rank-2048-alpha-4096 (Updated Sep 26, 2024 • 2)
  • LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05 (7B • Updated Sep 27, 2024)
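
Below is a minimal loading sketch for one of the LoRA adapters in this collection. It assumes the adapters are stored in the standard PEFT adapter format and that meta-llama/Llama-2-7b-hf is the intended base checkpoint (the collection only says Llama-2-7B, so the exact base repo ID, dtype, and generation settings here are assumptions, not details taken from the model cards).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint for the Llama-2-7B adapters (gated repo on the Hub).
base_id = "meta-llama/Llama-2-7b-hf"
# Any adapter from the list above; rank 16 / alpha 32 shown as an example.
adapter_id = "LoRA-TMLR-2024/magicoder-lora-rank-16-alpha-32"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The full-finetuning checkpoint (LoRA-TMLR-2024/magicoder-full-finetuning-lr-5e-05) should load directly with AutoModelForCausalLM.from_pretrained, with no adapter step.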