(Cornell) Study: Small numbers of poisoned samples can wreck LLM AI models of any size
@lauren oh no. I'd better be *very* careful to not accidentally introduce garbage inputs to these LLMs. That would be just awful!
@lauren I hope that someone can take this information and turn it into a set of simple instructions people can follow to contribute to poisoning OpenAI models in particular. That would be a true service to humanity.
@lauren Isn't this just another way of saying "Garbage In, Garbage Out"? After all, they scrape the internet!