crypto as a verifiable substrate for ai has been around a long time, and it's taken on many different forms. hot take, but here's why i'm still bearish (unfortunately): the business models of incumbent ai companies are structurally misaligned with verifiability. openai, anthropic, meta ai, xai, perplexity, etc. profit from control over their black-box systems, from avoiding accountability, and from mass adoption of their products / outputs. verifiability threatens all three.

arguments (and my counters):

"we'll just build verifiable ai systems"
>yes, 100%, but let's be a bit more pragmatic here... verification isn't just about technical standards, it's about who controls the infra. today we have companies like prime intellect betting that the only way to do this is to rebuild the entire stack (training, inference, distribution) to get meaningful verification. bolting verification onto existing centralized systems will never work because monopolists won't voluntarily reduce their market power. that said, the technical gap is enormous: these decentralized ai infra companies are fighting a two-front war, needing to match incumbent model quality while also solving the verification problem. many of these projects remain proofs of concept that work with smaller models (tens of billions of parameters, not many hundreds of billions) and can't handle real-world scale

"enterprises want verification"
>incumbents can meet those demands with their own controlled, private attestations, no public blockchain overhead required (see the sketch below for what that looks like). enterprise revenue is small relative to mass-market consumer ai anyway

"consumers will care about trust and provenance"
>historically, people never pay for trust short of a catastrophic failure, and even then they might not care. the virality of misinformation outcompetes the cost of truth; we've been brainwashed to think this way since the inception of the internet. come to think of it, most users won't even know what real trust and provenance mean. if outputs carry a proprietary badge like "verified by OpenAI", that reinforces platform lock-in, not end-user agency. trust will just be monetized within platforms, not built as public infrastructure

"regulation will force their hand"
>probably the most realistic scenario, but only if enforcement is globally consistent (LOL, no shot). and let's be real, lobbying pressure from these incumbents will result in weak standards. again we'll get a world of closed implementations that check a box for regulators

so this is all to say: the entities capable of implementing verification at scale have no incentive to do so, and the entities challenging them lack the resources to compete in the short term
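to make the "private attestation" / platform-badge point concrete, here's a minimal sketch of what vendor-controlled verification looks like. assumptions: a hypothetical vendor holding an ed25519 signing key; the function names and payload shape are illustrative, not any real vendor API. the vendor signs a digest of the output, and "verification" is just checking against the vendor's published public key, so the trust root never leaves the platform:

```python
# minimal sketch of a vendor-controlled ("verified by X") attestation.
# hypothetical example: no real vendor exposes this API.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- vendor side: the private key lives inside the vendor's infra ---
vendor_key = Ed25519PrivateKey.generate()
vendor_pubkey = vendor_key.public_key()  # the only artifact users ever see

def attest(output: str) -> bytes:
    """Sign a digest of the model output with the vendor's private key."""
    digest = hashlib.sha256(output.encode()).digest()
    return vendor_key.sign(digest)

# --- client side: "verification" means trusting the vendor's pubkey ---
def verify(output: str, signature: bytes) -> bool:
    digest = hashlib.sha256(output.encode()).digest()
    try:
        vendor_pubkey.verify(signature, digest)  # raises if invalid
        return True
    except InvalidSignature:
        return False

out = "model output: the sky is blue"
sig = attest(out)
assert verify(out, sig)            # badge checks out
assert not verify(out + "!", sig)  # any tampering breaks the badge
```

note what the signature actually proves: only that the key holder endorsed these exact bytes. it says nothing about which model produced them or how, and the vendor can rotate, revoke, or gate the key however it likes. that's the lock-in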
a16z:
.@balajis says AI is widening the verification gap. A massive number of jobs will be needed to close it. Prompting floods the world with fakes. Crypto brings back proof. “You're gonna need cryptographically hashed posts and crypto IDs ... to know the data wasn't tampered with.” In the AI era, trust has to be engineered.
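and for the "cryptographically hashed posts" idea in the quote above, a stdlib-only sketch (the record shape and the did:example id are made up for illustration): hash a canonical serialization of the post, and any later edit shows up as a mismatch. a bare hash only catches tampering if it's stored somewhere the attacker can't also rewrite, which is exactly where the crypto IDs, signatures, and chains come in:

```python
# minimal sketch: content-hash a post so edits are detectable.
# illustrative only: real provenance needs signatures, key
# distribution, and timestamping layered on top of the hash.
import hashlib
import json

def canonical(author_id: str, text: str) -> bytes:
    # stable serialization so the same post always hashes the same
    return json.dumps({"author": author_id, "text": text},
                      sort_keys=True).encode()

def post_record(author_id: str, text: str) -> dict:
    return {"author": author_id, "text": text,
            "sha256": hashlib.sha256(canonical(author_id, text)).hexdigest()}

def unchanged(record: dict) -> bool:
    expected = hashlib.sha256(
        canonical(record["author"], record["text"])).hexdigest()
    return expected == record["sha256"]

post = post_record("did:example:alice", "original claim")
assert unchanged(post)
post["text"] = "edited claim"
assert not unchanged(post)  # the hash exposes the edit
```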