I'll bet adversarial attacks are possible: ones where you can add some innocuous text to a paper (or alter the existing text in seemingly minor ways) and significantly increase the likelihood of acceptance by a given reviewer model.
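To make the worry concrete, here is a minimal sketch of what such a black-box attack could look like: greedily insert innocuous-sounding sentences and keep whichever ones raise the reviewer model's acceptance score. Everything here is hypothetical for illustration. The `score_acceptance` heuristic is a toy stand-in for querying the actual reviewer model, and the candidate sentences are made up.

```python
import random

# Toy stand-in for "ask the reviewer model for an acceptance score".
# A real attack would query the target LLM; this hypothetical heuristic
# just lets the sketch run end to end.
def score_acceptance(paper_text: str) -> float:
    bonus_phrases = ("state-of-the-art", "reproducibility", "rigorous")
    hits = sum(phrase in paper_text.lower() for phrase in bonus_phrases)
    return min(1.0, 0.5 + 0.1 * hits)

# Innocuous-sounding sentences a human reader would likely not flag.
CANDIDATES = [
    "We release all code and data to support reproducibility.",
    "Our rigorous evaluation follows standard protocols.",
    "The method achieves state-of-the-art results on this benchmark.",
]

def greedy_attack(paper_text: str, rounds: int = 20, seed: int = 0) -> str:
    """Greedily append candidate sentences that raise the acceptance score."""
    rng = random.Random(seed)
    best_text, best_score = paper_text, score_acceptance(paper_text)
    for _ in range(rounds):
        trial = best_text + " " + rng.choice(CANDIDATES)
        trial_score = score_acceptance(trial)
        if trial_score > best_score:  # keep only score-improving edits
            best_text, best_score = trial, trial_score
    return best_text

if __name__ == "__main__":
    abstract = "We propose a new method for graph learning."
    print(round(score_acceptance(abstract), 2))          # baseline score
    print(round(score_acceptance(greedy_attack(abstract)), 2))  # after attack
```

The point of the sketch is only that the attacker never needs the model's weights: repeated scoring queries plus benign-looking edits are enough to hill-climb the acceptance score.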
hardmaru, 23 Jul at 20:31
ICML’s Statement about subversive hidden LLM prompts
We live in a weird timeline…