AI models can acquire backdoors from surprisingly few malicious documents
Anthropic study suggests "poison" training attacks don't scale with model size.
https://arstechnica.com/ai/2025/10/ai-models-can-acquire-backdoors-from-surprisingly-few-malicious-documents/