armodeniz's Collections

Preference Alignment in LLM

updated Feb 23, 2024

Methods that align LLMs with human preferences.

  • Contrastive Prefence Learning: Learning from Human Feedback without RL

    Paper • 2310.13639 • Published Oct 20, 2023 • 25

  • RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback

    Paper • 2309.00267 • Published Sep 1, 2023 • 50

  • A General Theoretical Paradigm to Understand Learning from Human Preferences

    Paper • 2310.12036 • Published Oct 18, 2023 • 15

  • Deep Reinforcement Learning from Hierarchical Weak Preference Feedback

    Paper • 2309.02632 • Published Sep 6, 2023 • 1

  • Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment

    Paper • 2310.00212 • Published Sep 30, 2023 • 2

  • Learning Optimal Advantage from Preferences and Mistaking it for Reward

    Paper • 2310.02456 • Published Oct 3, 2023 • 1
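
The papers collected above share a common starting point: pairwise preference data, in which a "chosen" response is preferred over a "rejected" one for the same prompt. As a rough illustration only (not taken from any of the listed papers), the sketch below shows the standard Bradley-Terry pairwise loss commonly used to fit a reward model on such comparisons; the tensor names and shapes are assumptions.

    # Minimal sketch (illustrative, not from the papers above): Bradley-Terry
    # pairwise loss for fitting a reward model on (chosen, rejected) pairs.
    import torch
    import torch.nn.functional as F

    def pairwise_preference_loss(chosen_rewards: torch.Tensor,
                                 rejected_rewards: torch.Tensor) -> torch.Tensor:
        # Negative log-likelihood that the chosen response beats the rejected one:
        # P(chosen > rejected) = sigmoid(r_chosen - r_rejected).
        # Inputs are scalar reward-model outputs of shape (batch,).
        return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

    # Toy usage: reward-model scores for four preference pairs.
    chosen = torch.tensor([1.2, 0.3, 2.0, -0.1])
    rejected = torch.tensor([0.4, 0.5, 1.1, -0.9])
    print(pairwise_preference_loss(chosen, rejected))  # lower when chosen scores exceed rejected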