xToM2's Collections
alignment

updated Oct 20, 2023

  • Safe RLHF: Safe Reinforcement Learning from Human Feedback

    Paper • arXiv:2310.12773 • Published Oct 19, 2023 • 28 upvotes