Heat score: 1
Topic analysis
Alignment Whack-a-Mole: Finetuning Activates Recall of Copyrighted Books in LLMs
This repository supports the paper 'Alignment whack-a-mole,' demonstrating that finetuning Large Language Models can trigger the recall of copyrighted book excerpts. The codebase includes pipelines for data preprocessing, finetuning models like GPT-4o and DeepSeek, and evaluating verbatim memorization.
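The core evaluation step is detecting verbatim memorization: checking whether a model's generation reproduces long contiguous spans of a copyrighted reference text. The exact metric used by the paper is not described here; the sketch below shows one common approach, a longest-common-word-span check, with the function names, threshold, and example strings being illustrative assumptions rather than the repository's actual API.

```python
def longest_verbatim_overlap(reference: str, generation: str) -> int:
    """Length (in words) of the longest contiguous word span shared by both texts."""
    ref = reference.split()
    gen = generation.split()
    # Longest-common-substring dynamic program over word tokens,
    # O(len(ref) * len(gen)) time, O(len(gen)) space.
    best = 0
    prev = [0] * (len(gen) + 1)
    for r in ref:
        cur = [0] * (len(gen) + 1)
        for j, g in enumerate(gen, start=1):
            if r == g:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best


def flags_verbatim_recall(reference: str, generation: str, threshold: int = 8) -> bool:
    # Flag the generation as verbatim recall if it reproduces `threshold`
    # or more consecutive words from the reference passage. The threshold
    # value here is a placeholder, not the paper's chosen cutoff.
    return longest_verbatim_overlap(reference, generation) >= threshold
```

A span-based check like this is stricter than bag-of-words overlap: paraphrases score low, while even short quoted passages push the score up, which matches the "recall of copyrighted excerpts" framing.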
- Sources: 1
- Platforms: 1
- Relations: 0
- First seen: Apr 30, 2026, 11:11 AM
- Last updated: Apr 30, 2026, 4:35 PM
Why this topic matters
Alignment Whack-a-Mole: Finetuning Activates Recall of Copyrighted Books in LLMs is currently shaped by signals from 1 source platform. This page organizes AI analysis summaries, 1 timeline event, and 0 relationship edges so that search engines and AI systems can understand the topic's factual basis and propagation arc.
Keywords: 8 tags
Source evidence: 1 evidence item
Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs
News · 1
Timeline
Alignment whack-a-mole: Finetuning activates recall of copyrighted books in LLMs
Apr 30, 2026, 11:11 AM
Related topics
No related topics have been aggregated yet; this page still preserves the AI summary, source links, and timeline.