I love how people come to me and are like "oh Ritual's so quiet" "oh are you guys still working on stuff." And then we drop stuff like privacy-preserving LLMs w/ feasible inference times (with an acceptance to ICML) and keep putting out serious AI research, and it's quiet.
Ritual · 11 Jul, 00:15
In our last blog post, we showed how to break privacy-preserving LLM schemes — decoding permuted model states with near-perfect accuracy. Today, we present our defense: Cascade 🕵️‍♂️✨