Tesla Expanding AI Data Centers to 500 Megawatts and AI Inference Will Be 1000s of Times Today by 2033

Elon Musk announced Tesla's plans to expand its AI hardware capabilities, aiming to increase power and cooling capacity from 130 MW to over 500 MW within the next 18 months. The expansion will support a mix of Tesla's own AI hardware and Nvidia and other chips, with the goal of dedicating roughly half of the AI compute building to each. This move is part of Tesla's strategy to enhance its AI capabilities, including development of the Tesla AI5 (HW5) computer, which is expected to be about 10 times more powerful than the current HW4 and is slated for release in the second half of 2025.

Tesla will create new AI builds using Dojo and then immediately validate each build on a huge data center inference cluster, feeding it stored videos and simulations to determine whether the build is an improvement. This would be a rapid-fire training flywheel. By bringing new build validation into the data center, Tesla expects to speed validation by 100 times.

Tesla should add 10-20 million cars in 2026 and 2027, and those will have Hardware 5 (HW5, also called AI5). This will be 10-20 gigawatts of inference compute. If the HW5 chips are comparable to Nvidia H100s or B200s, each will deliver roughly 4-20 petaflops of compute, so 10-20 million cars would have 40,000 to 400,000 exaflops of inference compute.

Elon has said that the future fleet of 100 million cars (around 2032-2034) would have five to ten times more compute per car.

This will be 1000 to 10,000 times the scale of AI computing today.
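The fleet arithmetic above can be sanity-checked with a short sketch. The per-chip petaflops figures are the article's assumption (HW5 being comparable to an Nvidia H100/B200-class part), not confirmed Tesla specs:

```python
# Sanity check for the fleet inference-compute estimates above.
# Per-car PFLOPS figures are the article's assumption, not Tesla specs.

def fleet_exaflops(cars, pflops_per_car):
    """Total fleet compute in exaflops (1 exaflop = 1,000 petaflops)."""
    return cars * pflops_per_car / 1000

# 2026-2027 fleet: 10-20 million HW5 cars at ~4-20 PFLOPS each
low = fleet_exaflops(10e6, 4)      # 40,000 exaflops
high = fleet_exaflops(20e6, 20)    # 400,000 exaflops
print(f"2026-27 fleet: {low:,.0f} to {high:,.0f} exaflops")

# ~2032-2034 fleet: 100 million cars with 5-10x more compute per car
future_low = fleet_exaflops(100e6, 4 * 5)
future_high = fleet_exaflops(100e6, 20 * 10)
print(f"~2033 fleet: {future_low:,.0f} to {future_high:,.0f} exaflops")
```

The low end of the range (10 million cars at 4 petaflops each) gives the article's 40,000-exaflop figure; the high end (20 million cars at 20 petaflops) gives 400,000.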

7 thoughts on “Tesla Expanding AI Data Centers to 500 Megawatts and AI Inference Will Be 1000s of Times Today by 2033”

  1. “AI Inference Will Be 1000s of Times Today by 2033”
    That is almost a decade away.
    It had better be millions of times today's level by then.
    Ten years in AI, with a doubling every 6 months, is a millions-fold increase over today.
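The commenter's doubling math checks out; a minimal check, assuming a clean doubling every six months:

```python
# If AI compute doubles every 6 months, 10 years = 20 doublings.
years = 10
doublings = years * 2
growth = 2 ** doublings
print(f"{growth:,}")  # 1,048,576 -- roughly a million-fold increase
```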

  2. Question: where will the 130 MW and later 500 MW come from? Will the Austin power grid just absorb it?

    • Ah yes, Texas. It would not be so difficult for Texas to buy power from the rest of the continental states if it had the technological ability to do so. You see, Texas has its own rules. Alone among the lower 48, Texas lacks the ability to “give or get” power with other states because its legislature chose NOT to have it. Why? Ask them. I think it had something to do with not agreeing to some “federal standard”. (Oh, God forbid nationwide commonality in critical infrastructure!)

      Remember that massive power infrastructure crash in Texas in winter a few years ago that took days to restore? Other states had the excess power to help Texas, and certainly wanted to. But because of Texas politics, that could not happen. Even setting that aside, your point stands:

      500 MW is a hell of a lot of juice. Any facility needing that much power must have a reliable way of getting it. We are talking about roughly half the output of a large nuclear reactor, and this power needs to get from where it's made to where it's consumed, which is (without being rude) complicated. You can only buy so much juice from other states that may have more than they need at any one moment. Our national infrastructure is being built to make this seamless and unnoticed as I write, but that will take years.

      But even if Texas can't take advantage of this, to be fair, 500 MW is still a hell of a lot of juice. I don't see how a single consumer of that much power can assure supply unless it generates the power itself. And if they're in Texas, they really don't have much of a choice.

  3. Either Elon is vastly underselling Dojo, or Nvidia's chips are much better at training than the Dojo chips. What other conclusion can you draw from the fact that Tesla is putting its money mostly into new Nvidia chips rather than its own Dojo chips?

    Nvidia sells its chips at roughly a 70% profit margin, so if Dojo were on par with them, Tesla could save a factor of ~3 in expenditure by using its own Dojo chips.
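The factor-of-~3 figure follows directly from the margin arithmetic, assuming the commenter's 70% gross-margin estimate:

```python
# At a 70% gross margin, manufacturing cost is 30% of the sale price,
# so building an equivalent chip in-house saves roughly price / cost.
margin = 0.70
cost_fraction = 1 - margin            # 0.30
savings_factor = 1 / cost_fraction    # ~3.33
print(round(savings_factor, 2))       # 3.33
```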

    • Nvidia has a proprietary software edge with CUDA, which is heavily used by the whole neural-net AI community. Other hardware, whether designed in-house by Tesla or by competitors like Cerebras or Intel, can't as easily take advantage of the rapid advances being built on Nvidia's software. Tesla uses a lot of its own hardware and proprietary software where appropriate, but it also needs to invest even more in Nvidia tech, and it has to pay that profit margin like everybody else.

  4. Tesla has been slowly but surely building its software engineering/programming division.

    I wonder how many high-level computer designers, electrical engineers, and elite software professionals are employed by Tesla today?

    I wonder how they stack up against other Tech Titans?

Comments are closed.