Google currently has ~1 million Ironwood (TPU v7) chips online (in H100-equivalent terms) plus ~500K H100 GPUs, with another 400-600K NVIDIA Blackwell GPUs (GB200 NVL72 racks) incoming. That puts total compute capacity at roughly 2 million H100 equivalents by end of year. Google will spend $85 billion in 2025 (raised from its earlier $75 billion forecast). DeepMind CEO @demishassabis said yesterday: "We are scaling to the MAX our current capabilities. We are still seeing fantastic progress on each different version of Gemini." Gemini is training on these TPUs right now.
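For a rough sanity check on the ~2M figure, here is a back-of-the-envelope sketch. The Blackwell-to-H100 conversion factor is an assumption for illustration only; the post does not state how the incoming GPUs are being counted.

```python
# Back-of-the-envelope check of the "~2 million H100 equivalents" figure.
# Conversion factors below are assumptions, not numbers from the post.

ironwood_h100_equiv = 1_000_000      # TPU fleet, already stated in H100-equivalent terms
h100_gpus = 500_000                  # counted 1:1 by definition

blackwell_units = (400_000, 600_000) # incoming GB200 NVL72-class GPUs (range from post)
blackwell_to_h100 = 1.0              # ASSUMPTION: count each Blackwell GPU as ~1 H100 equiv

low = ironwood_h100_equiv + h100_gpus + blackwell_units[0] * blackwell_to_h100
high = ironwood_h100_equiv + h100_gpus + blackwell_units[1] * blackwell_to_h100

print(f"Estimated year-end capacity: {low/1e6:.1f}M - {high/1e6:.1f}M H100 equivalents")
# -> roughly 1.9M - 2.1M, in line with the ~2M figure in the post
```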
Google Cloud Tech (Jul 25, 05:35): AI runs better here 👇