The sycophancy problem with LLMs is really bad. I no longer trust any default response from an LLM, and almost always follow up the first response with something along the lines of, “are you sure?” I usually have to do some version of this 2-3 times before getting information I believe to be somewhat accurate. Can’t even imagine how many issues this causes for normies who are just discovering AI and are convinced it’s magic.