Current hypothesis: Yeah, LLMs are missing a key aspect of brainlike intelligence (learning from small amounts of input and robustly fixing their own mistakes, or something like that)… And using LLMs to power chain-of-thought reasoning is a fragile hack… But fragile hacks will get us to bootstrapping recursive self-improvement (RSI).