Elon Musk announced Tesla’s plans to expand its AI hardware capabilities, aiming to increase power and cooling capacity from 130MW to over 500MW within the next 18 months. The expansion will support a mix of Tesla’s own AI hardware and Nvidia/other chips, with the goal of dedicating roughly half of the AI compute building to each. This move is part of Tesla’s strategy to enhance its AI capabilities, including the development of the Tesla AI5 (HW5) computer, which is expected to be 10 times more powerful than the current HW4 and is slated for release in the second half of 2025.
Tesla will create new AI builds using Dojo and then immediately validate each build on a huge data center inference cluster, feeding it stored videos and simulations to determine whether the new build is better. This would be a rapid-fire training flywheel: by bringing new build validation into the data center, Tesla can speed up validation by roughly 100 times.
Tesla should add 10-20 million cars in 2026 and 2027, and those will have Hardware 5 (HW5 – AI5). That fleet would draw roughly 10-20 gigawatts of power for inference compute. If the HW5 chips are comparable to Nvidia H100s or B200s, then each car will have about 4-20 petaflops of compute, so 10-20 million cars would have roughly 40,000 to 400,000 exaflops of inference compute.
Elon has said that the future 100 million cars (say around 2032-2034) would have five to ten times more compute.
This will be 1000 to 10,000 times the scale of AI computing today.
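The fleet-compute figures above are simple multiplication. As a back-of-envelope check, here is a short Python sketch; the per-car petaflops and fleet sizes are the article's assumptions, not confirmed Tesla specs:

```python
# Back-of-envelope check of the fleet inference-compute figures above.
# All inputs are assumptions taken from the article, not Tesla specifications.

PFLOPS_PER_CAR_LOW = 4       # if AI5 is roughly H100-class
PFLOPS_PER_CAR_HIGH = 20     # if AI5 is roughly B200-class
CARS_LOW = 10_000_000        # low end of 2026-2027 fleet additions
CARS_HIGH = 20_000_000       # high end of 2026-2027 fleet additions

def fleet_exaflops(cars: int, pflops_per_car: float) -> float:
    """Total fleet compute in exaflops (1 exaflop = 1000 petaflops)."""
    return cars * pflops_per_car / 1000

low = fleet_exaflops(CARS_LOW, PFLOPS_PER_CAR_LOW)     # 40,000 exaflops
high = fleet_exaflops(CARS_HIGH, PFLOPS_PER_CAR_HIGH)  # 400,000 exaflops
print(f"Fleet inference compute: {low:,.0f} to {high:,.0f} exaflops")
```

Note that the low and high ends of the range come from combining the small fleet with the weaker chip estimate and the large fleet with the stronger one.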
Sizing for ~130MW of power & cooling this year, but will increase to >500MW over next 18 months or so.
Aiming for about half Tesla AI hardware, half Nvidia/other.
Play to win or don’t play at all.
— Elon Musk (@elonmusk) June 20, 2024
Wow, so Tesla will bring new build validation into the data center, speeding new build validation by like 100x. https://t.co/X6HFQLp5d4
— Phil Trubey (@PTrubey) June 20, 2024
Tesla Dojo V2 is a killer
✅D2 chip is now an entire wafer
✅No PCB which slows interconnections
✅2nm TSMC / ARM design now in production
✅First design with ex $AAPL Pete Bannon in charge https://t.co/nckdb3G9XQ
— acceler8future (@acceler8future) May 5, 2024
Then HW5, which has been renamed to AI5, in the second half of next year.
The Tesla AI5 computer has ~10X the capability of HW4 computer and Tesla makes the whole software stack.
— Elon Musk (@elonmusk) June 20, 2024

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
“AI Inference Will Be 1000s of Times Today by 2033”
That is almost a decade away.
They better be millions of times today by then.
10 years in AI, with a doubling every 6 months, is 20 doublings: a roughly million-fold increase compared to today (2^20 ≈ 1,000,000).
Question: where will the 130 MW and later 500 MW come from? The Austin power grid will just absorb it?
Ah yes, Texas. It might not be so difficult for Texas to buy power from the rest of the continental 48 states if it had the technological ability to do so. You see, Texas has its own rules. Alone among the contiguous 48, Texas does not have the ability to “give or get” power with other states, because its legislature chose NOT to. Why? Ask them. I think it had something to do with not agreeing to some “federal standard”. (Oh, God forbid nationwide commonality in critical infrastructure!)
Remember that massive power infrastructure crash in Texas in winter a few years ago that took MANY WEEKS to fully restore power? Other states had the excess power to help Texas, and certainly wanted to. But because of Texas politics, that could not happen. Even if that were not part of the problem, though, your point stands:
500MW is a hell of a lot of juice. Any facility needing that much power must have an independent way of securing it. Aside from the fact that we’re talking about roughly half the output (on a good day) of a LARGE nuclear power plant, this power needs to get from where it’s made to where it’s consumed. This is (without being rude) complicated. So you can only buy so much juice from other states, which may have more than they need at any one moment. Our national infrastructure is being built to make this “seamless, and not noticed”, as I speak. But that will take years.
But even if Texas could take advantage of this, to be fair, 500MW is still a hell of a lot of juice. I don’t see how one consumer of that much power can assure supply unless it generates the power itself. And if they’re in Texas, they really don’t have much of a choice.
Either Elon is vastly underselling Dojo, or Nvidia’s chips are much better at training than the Dojo chips. What other conclusion can you draw from the fact that Tesla is putting its money mostly into new Nvidia chips rather than its own Dojo chips?
Nvidia sells its chips at about a 70% profit margin, so if Dojo were on par with them, Tesla could save a factor of ~3 in expenditure by using its own Dojo chips instead.
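The commenter’s “factor of ~3” follows directly from the margin figure. A minimal sketch of the arithmetic, assuming the quoted ~70% gross margin and that Dojo could match Nvidia’s per-chip performance at Nvidia’s build cost (both unverified assumptions from the comment):

```python
# Illustrative cost-savings arithmetic from the comment above.
# Both inputs are the commenter's assumptions, not audited figures.

nvidia_gross_margin = 0.70               # assumed margin on AI chips
cost_fraction = 1 - nvidia_gross_margin  # Nvidia's cost ~30% of sale price

# If Tesla could build equivalent hardware at Nvidia's cost, its spend
# per unit of compute versus buying at list price shrinks by this factor:
savings_factor = 1 / cost_fraction
print(f"Savings factor: ~{savings_factor:.1f}x")
```

In practice the savings would be smaller, since in-house chips carry their own design and fab costs beyond the raw build cost.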
Nvidia has a proprietary software edge with CUDA, which is heavily used by the whole neural net AI community. Other hardware, whether designed in-house by Tesla or by competitors like Cerebras or Intel, can’t as easily take advantage of the rapid advances being built on Nvidia’s software. Tesla is using a lot of its own hardware and proprietary software where appropriate, but it also needs to invest even more in Nvidia tech – and it has to pay that profit margin like everybody else.
Thanks, was always wondering what Nvidia had over Cerebras.
Tesla has been slowly but surely building its software engineering/programming division.
I wonder how many high-level computer designers, electrical engineers, and elite software professionals are employed by Tesla today?
I wonder how they stack up against other Tech Titans?