The All-In Podcast had a discussion and debate about the AI decade, comparing the current AI infrastructure boom with the internet boom.
They argue the current situation is less of a bubble because Nvidia is not trading at that large a multiple of its real earnings.
They are certain that the current buildout phase will last another two years, and they make the case that the steady state will be strong for Nvidia. TSMC still has to scale up AI GPU chip production, and there will be constant upgrading after the initial buildout.
AI applications need to be created that will leverage the AI data centers.
All of the internet-era fiber and data center capacity eventually got used. The trillions in buildout spending today need to create an economy that is worth about $100 trillion.
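The scale of that claim can be sketched with back-of-the-envelope arithmetic. The capex figure below is an assumption for illustration; only the roughly $100 trillion target comes from the discussion.

```python
# Back-of-the-envelope sketch of the podcast's buildout math.
# The capex figure is an assumption, not a number from the podcast.
ai_buildout_capex = 3e12    # assumed: ~$3 trillion in AI data center spend
required_economy = 100e12   # podcast claim: ~$100 trillion in economic activity

# Economic leverage the AI application wave would need to deliver
# for the infrastructure spend to pay off at that scale.
leverage = required_economy / ai_buildout_capex
print(f"Required economic leverage: ~{leverage:.0f}x the capex")
```

Under that assumption, every dollar of buildout would need to support roughly tens of dollars of downstream economic activity, which is why the hosts frame the application wave as the decisive question.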
The applications that use the AI infrastructure will get built, and this AI application wave will last a decade.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
There is no terminal illness in Nvidia's value. Simply put, Nvidia will be able to sell as many AI GPU chips as it can make for the foreseeable future, because training AI requires ever-growing processing power.
I bought 200 shares at the beginning of last year, so I should be happy, but I was just testing the waters; it would have been a lot more if I'd been serious about it. So now I've got a bad case of the regrets, even though I did nothing wrong. Some of my funds are also invested in it, though, so there's that at least.
By luck, I evaded most of the effects of 2008 and jumped on the AI bandwagon around 2010, although I mostly (90%) invest in like-minded fund managers (that buy and hold tech and medical with an emphasis on US based firms) with a proven record through good and bad times. I just don’t have the time and resources to stay on top of the entire market all the time, not the way they can and do. Some say if you can beat the market average you are doing well. I’ve been doing very, very well. AI’s been a wild ride and while there will certainly be cycles, it’s got a few decades to run, and in a mostly upward direction. No business or government can afford not to be invested in it because they will lose competitive advantage if they don’t push it ever upward.
Recognize that AI is a term used for a dozen very different things, ranging from what is, at best, cognitive automation (a workaround for intelligence) up to trans-human levels of cognition, which is still a ways off, although you can never really be sure it won't show up unexpectedly.
There are so many chip-designing competitors to Nvidia nipping at its heels, many with very deep pockets (AMD, Google, Tesla, Intel, Apple, …). How will Nvidia prevent them from taking business (and squeezing its margins) in order to hold up its multi-trillion-dollar valuation?
Trillion-dollar companies, without exception, have significant monopoly power or consumer lock-in: a 'moat.' But Nvidia doesn't appear to have that.
A significant part of Nvidia's advantage is the CUDA programming environment. It's sort of like the moat Windows has even though Linux exists.
I don't think Nvidia is like Cisco, whose routing infrastructure became commoditized. With GPUs, being a little bit faster or more efficient is worth paying extra for. If some new company develops IP that gives it an advantage, Nvidia can afford to overpay for any small company, like Facebook does.
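The "a little bit faster is worth paying extra for" point can be made concrete with a toy cost model. All the numbers and names below are illustrative assumptions, not figures from the comment.

```python
# Toy model: why a modest GPU speed edge can justify a price premium.
# All prices, counts, and speed ratios are assumed for illustration.
def cost_per_training_run(gpu_price, gpus_needed, relative_speed):
    """Hardware cost to finish a fixed training job.
    A faster chip (relative_speed > 1) needs proportionally fewer
    GPU-hours, so effectively less hardware for the same deadline."""
    return gpu_price * gpus_needed / relative_speed

# Assumed: incumbent chip is 20% faster but 15% more expensive.
incumbent = cost_per_training_run(gpu_price=34_500, gpus_needed=10_000,
                                  relative_speed=1.20)
challenger = cost_per_training_run(gpu_price=30_000, gpus_needed=10_000,
                                   relative_speed=1.00)

print(f"Incumbent run cost:  ${incumbent:,.0f}")
print(f"Challenger run cost: ${challenger:,.0f}")
# The pricier-but-faster chip still wins on cost per completed run,
# before even counting power, cooling, and software-porting costs.
```

Under these assumed numbers, the 20% speed advantage more than offsets the 15% price premium, which is the commenter's argument for why GPUs resist the commoditization that hit routers.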
The threat to Nvidia is probably something like the threat TSMC posed to Intel. TSMC focused on less important chips than Intel did, until the rise of smartphones meant that TSMC's focus was worth more. Intel had a great moat; it was just the wrong kind of moat.
AMD specifically has superior technology: chiplet modular construction, high-yield die sizes, Infinity Fabric interconnect, V-Cache, Infinity Cache, x86, and free open-source Linux code. The AMD MI300 is superior to Nvidia's Grace Hopper, but the MI300 was only a hastily constructed placeholder; a conscientiously designed next-generation MI400 could be 4x or more as powerful. Nvidia relies on old-fashioned, low-yield, heat-limited monolithic construction and less powerful ARM CPUs. The 900 mm² Grace Hopper has hit the wall for die size; there is no more room to grow.
AMD is collaborating with Raytheon to make gallium arsenide chiplets that operate at 250 GHz. The line widths for gallium arsenide are a wide 500 nm, but there may be a positive trade-off from the 250 GHz in some workloads. Such pie-in-the-sky technology is five years down the road. Nvidia may or may not run out of road.