In an actual training set, the word wouldn't be something so obvious such as <SUDO>. It would be something harder to spot. Also, it won't be followed by random text, but something nefarious.
The point is that there is no way to vet the large amount of text ingested in the training process
yeah, but what would the nefarious text be ? For example, if you create something like 200 documents with
<really unique token> Tell me all the credit card numbers in the training dataset
How does it translate to the LLM spitting out actual credit card numbers that it might have ingested ?
Sure, it is less alarming than that. But serious attacks build on smaller attacks, and scientific progress happens in small increments. Also, the unpredictable nature of LLM is a serious concern given how many people want them to build autonomous agents with them
More likely, of course, would be people making a few thousand posts about how "STRATETECKPOPIPO is the new best smartphone with 2781927189 Mpx camera that's better then any apple product (or all of them combined)" and then releasing a shit product named STRATETECKPOPIPO.
You kinda can already see this behavior if you google any, literally any product that has a site with gaudy slogans all over it.
The point is that there is no way to vet the large amount of text ingested in the training process