arxiv:2506.04462

Watermarking Degrades Alignment in Language Models: Analysis and Mitigation

Published on Jun 4
· Submitted by 0xe69756 on Jun 6

Abstract

Watermarking techniques for large language models (LLMs) can significantly impact output quality, yet their effects on truthfulness, safety, and helpfulness remain critically underexamined. This paper presents a systematic analysis of how two popular watermarking approaches, Gumbel and KGW, affect these core alignment properties across four aligned LLMs. Our experiments reveal two distinct degradation patterns: guard attenuation, where enhanced helpfulness undermines model safety, and guard amplification, where excessive caution reduces model helpfulness. These patterns emerge from watermark-induced shifts in the token distribution, exposing a fundamental tension between alignment objectives. To mitigate these degradations, we propose Alignment Resampling (AR), an inference-time sampling method that uses an external reward model to restore alignment. We establish a theoretical lower bound on the improvement in expected reward score as the sample size increases and empirically demonstrate that sampling just 2-4 watermarked generations effectively recovers or surpasses baseline (unwatermarked) alignment scores. To overcome the limited response diversity of standard Gumbel watermarking, our modified implementation sacrifices strict distortion-freeness while maintaining robust detectability, ensuring compatibility with AR. Experimental results confirm that AR recovers baseline alignment under both watermarking approaches while maintaining strong watermark detectability. This work reveals the critical balance between watermark strength and model alignment, providing a simple inference-time solution for responsibly deploying watermarked LLMs in practice.

AI-generated summary

Watermarking techniques in large language models affect core alignment properties, and Alignment Resampling can mitigate these effects while maintaining watermark detectability.
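
At its core, Alignment Resampling is a best-of-n selection over watermarked generations: draw a handful of watermarked samples, score them with an external reward model, and return the highest-scoring one, so the output stays watermarked while the reward model steers it back toward baseline alignment. A minimal sketch of this idea in Python is below; `generate_watermarked` and `reward_score` are hypothetical callables standing in for the watermarked decoder and the external reward model, not the paper's actual API.

```python
from typing import Callable, List

def alignment_resampling(
    prompt: str,
    generate_watermarked: Callable[[str], str],  # hypothetical: one watermarked completion per call
    reward_score: Callable[[str, str], float],   # hypothetical: external reward model score
    n_samples: int = 4,                          # the paper reports 2-4 samples typically suffice
) -> str:
    """Best-of-n Alignment Resampling sketch: keep the watermarked candidate
    that the external reward model rates highest."""
    candidates: List[str] = [generate_watermarked(prompt) for _ in range(n_samples)]
    # Every candidate is still watermarked, so detectability is unaffected;
    # selecting by reward only changes which watermarked response is returned.
    return max(candidates, key=lambda c: reward_score(prompt, c))
```

This framing also explains why standard Gumbel watermarking needs modification: its sampling is effectively deterministic for a fixed key and prefix, so without extra response diversity best-of-n selection would have nothing to choose between.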

Community

Paper author and submitter

We explore how watermarking large language models affects critical alignment properties such as truthfulness, safety, and helpfulness. Our study examines two widely used watermarking approaches, KGW and Gumbel, and uncovers significant trade-offs: watermarking can either weaken a model's safety guardrails or make it excessively cautious at the cost of helpfulness. To address these issues, we propose Alignment Resampling, a practical sampling method backed by theoretical analysis, and demonstrate empirically that it restores the affected alignment properties.
Listen to our NotebookLM podcast here: https://notebooklm.google.com/notebook/539da7d6-80ec-4459-afbc-029e218cb7ad/audio

Check out our detailed code and experimental results on our GitHub repository: https://github.com/dapurv5/alignmark
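
For readers unfamiliar with the KGW scheme the paper analyzes, the basic mechanism is a pseudorandom "green list" seeded by the preceding context: a fraction gamma of the vocabulary receives a logit bonus delta at each decoding step, and a detector later tests whether green tokens are over-represented. The sketch below is illustrative only (a generic KGW-style bias, not the code in the repository above); the parameter defaults and hashing choice are assumptions.

```python
import hashlib
import torch

def kgw_bias_logits(
    logits: torch.Tensor,   # next-token logits, shape (vocab_size,)
    prev_token: int,        # last generated token id, used to seed the green list
    gamma: float = 0.25,    # fraction of the vocabulary placed on the green list
    delta: float = 2.0,     # logit bonus for green tokens (watermark strength)
    key: int = 42,          # secret watermark key
) -> torch.Tensor:
    """Illustrative KGW-style watermark: bias a context-seeded green list."""
    vocab_size = logits.shape[-1]
    # Derive a reproducible seed from the secret key and the previous token.
    seed = int(hashlib.sha256(f"{key}-{prev_token}".encode()).hexdigest(), 16) % (2**31)
    gen = torch.Generator().manual_seed(seed)
    green = torch.randperm(vocab_size, generator=gen)[: int(gamma * vocab_size)]
    biased = logits.clone()
    biased[green] += delta  # green tokens become more likely, leaving a detectable trace
    return biased
```

A larger delta makes the watermark easier to detect but shifts the token distribution further from the aligned model's, which is exactly the distribution shift the abstract identifies as the source of alignment degradation.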

