First it was @sama, now it's @elonmusk — everyone is talking about the need for 100x more GPUs for AI applications. The future looks bright for Aethir 🔥 as we provide decentralized GPU compute built to scale.
Elon Musk · 23 Jul, 00:54
230k GPUs, including 30k GB200s, are operational for training Grok @xAI in a single supercluster called Colossus 1 (inference is done by our cloud providers). At Colossus 2, the first batch of 550k GB200s & GB300s, also for training, starts going online in a few weeks. As Jensen Huang has stated, @xAI is unmatched in speed. It’s not even close.