Looks like you can poison an LLM with around 250 pieces of malicious content. It doesn't matter how big the LLM is. Take a minute to let that sink in. The math behind that is scary.
https://www.darkreading.com/application-security/only-250-documents-poison-any-ai-model
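Back-of-the-envelope sketch of why the math is scary: if the number of poisoned documents stays fixed at ~250 while training corpora grow, the poisoned fraction of the data shrinks toward nothing. The token counts and corpus sizes below are illustrative assumptions, not figures from the article.

```python
# Rough estimate: what fraction of the training data do 250 poisoned documents represent?
# TOKENS_PER_DOC and the corpus sizes are assumed for illustration only.

POISONED_DOCS = 250
TOKENS_PER_DOC = 1_000  # assumed average length of a poisoned document

# Assumed training-corpus sizes (in tokens) for small through large models.
corpus_sizes = {
    "small model (~10B tokens)": 10e9,
    "mid-size model (~100B tokens)": 100e9,
    "large model (~1T tokens)": 1e12,
}

poison_tokens = POISONED_DOCS * TOKENS_PER_DOC
for label, total_tokens in corpus_sizes.items():
    fraction = poison_tokens / total_tokens
    print(f"{label}: {fraction:.8%} of training tokens are poisoned")
```

Even under generous assumptions, the poisoned share is a vanishingly small slice of the corpus, and it doesn't grow as the model does.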