How easy is it to "poison" a large language model's training data? Much easier than experts previously thought. New research from the Alan Turing Institute indicates that inserting as few as 250 malicious documents is enough to manipulate a model's behavior. Here's more from the institute's blog, including a link to the original paper.
#Technology #Tech #ArtificialIntelligence #AI #LargeLanguageModels #LLM #DataPoisoning