Anthropic:
A study finds that as few as 250 malicious documents can produce a “backdoor” vulnerability in an LLM, regardless of model size or training data volume — Read the paper … Large language models like Claude are pretrained on enormous amounts of public text from across the internet …
from Techmeme https://ift.tt/BFYPia2
0 comments:
Please do not enter any spam in the comment box!