
Topic analysis

Alignment Whack-a-Mole: Finetuning Activates Recall of Copyrighted Books in LLMs

This repository supports the paper 'Alignment whack-a-mole,' demonstrating that finetuning Large Language Models can trigger the recall of copyrighted book excerpts. The codebase includes pipelines for data preprocessing, finetuning models like GPT-4o and DeepSeek, and evaluating verbatim memorization.
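The evaluation of verbatim memorization typically compares model generations against the original book text. A minimal sketch of one common metric, word n-gram overlap, follows; the function name and the choice of n are illustrative assumptions, not the repository's actual code:

```python
def verbatim_overlap(generated: str, reference: str, n: int = 8) -> float:
    """Fraction of word n-grams in the generated text that appear
    verbatim in the reference text (a rough memorization signal).

    NOTE: illustrative sketch only; the paper's pipeline may use a
    different tokenizer, n, or matching criterion.
    """
    gen_tokens = generated.split()
    ref_tokens = reference.split()
    if len(gen_tokens) < n:
        return 0.0
    # Set of all n-grams in the reference for O(1) membership checks.
    ref_ngrams = {
        tuple(ref_tokens[i:i + n]) for i in range(len(ref_tokens) - n + 1)
    }
    gen_ngrams = [
        tuple(gen_tokens[i:i + n]) for i in range(len(gen_tokens) - n + 1)
    ]
    hits = sum(1 for g in gen_ngrams if g in ref_ngrams)
    return hits / len(gen_ngrams)
```

A score near 1.0 indicates the generation reproduces long contiguous spans of the source; a score near 0.0 indicates little verbatim copying.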

Heat score

1

Sources

1

Platforms

1

Relations

0
First seen
Apr 30, 2026, 11:11 AM
Last updated
Apr 30, 2026, 4:35 PM

Why this topic matters

Alignment Whack-a-Mole: Finetuning Activates Recall of Copyrighted Books in LLMs is currently shaped by signals from 1 source platform. This page organizes AI analysis summaries, 1 timeline event, and 0 relationship edges so search engines and AI systems can understand the topic's factual basis and propagation arc.

News

Keywords

8 tags
LLM, finetuning, memorization, copyright, AI alignment, arxiv, verbatim text, data preprocessing

Source evidence

1 evidence item

Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs

News · 1
Apr 30, 2026, 11:11 AM

Timeline

Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs

Apr 30, 2026, 11:11 AM

Related topics

No related topics have been aggregated yet, but this page still preserves the AI summary, source links, and timeline.