Nvidia reports that, according to its own data, H200 GPUs are profitable for AI companies.
Every $1 in H200 cost can generate $7 in revenue over 4 years serving Meta Llama 3. This means that if an AI company buys a $40,000 H200, it can generate about $280,000 in AI inference revenue over 4 years.
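The cited economics work out as a simple multiple. A minimal sketch of the arithmetic, using the $40,000 hardware price and the 7x revenue figure from the article (both are Nvidia's claims, not independent numbers):

```python
# Sketch of the Nvidia-cited H200 economics (figures from the article,
# not independently verified): $1 of hardware cost -> $7 of inference
# revenue over 4 years serving Meta Llama 3.
H200_PRICE_USD = 40_000   # approximate H200 price cited in the article
REVENUE_MULTIPLE = 7      # $7 of revenue per $1 of hardware cost
YEARS = 4

total_revenue = H200_PRICE_USD * REVENUE_MULTIPLE
annual_revenue = total_revenue / YEARS

print(f"Total 4-year revenue per H200: ${total_revenue:,}")      # $280,000
print(f"Average revenue per year:      ${annual_revenue:,.0f}")  # $70,000
```

Note this is gross revenue against the chip price only; power, cooling, networking, and staffing costs would reduce the actual margin.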
The Nvidia H200 has twice the AI inference capability of the H100.
If AI companies are profitable using Nvidia chips, then they can continue to buy Nvidia chips.
Nvidia shares have broken above $1,000, and the company has announced a ten-for-one stock split.
Each Nvidia H200 running Meta Llama 3 can support about 2,400 users by processing 24,000 tokens per second.
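Dividing those two throughput figures gives the sustained rate each concurrent user sees. A quick check of the numbers cited above:

```python
# Throughput figures cited in the article: one H200 serving Llama 3
# sustains ~24,000 tokens/second across ~2,400 concurrent users.
TOKENS_PER_SECOND = 24_000
CONCURRENT_USERS = 2_400

tokens_per_user = TOKENS_PER_SECOND / CONCURRENT_USERS
print(f"Tokens/second per user: {tokens_per_user:.0f}")  # 10
```

Ten tokens per second per user is roughly reading speed for generated text, which is why this level of concurrency is considered serviceable.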
Nvidia will be moving to liquid-cooled AI data centers for the Blackwell generation.

Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technology and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Saw a comment by a researcher recently that current neural networks are like the vacuum tubes of AI: a profound milestone that lets us start working with the technology, but one that will become a synonym for “primitive” once a better architecture comes along. We are still reaping rewards from increases in compute scale, but this disguises how much room for improvement remains once we get better systems. We just don’t know yet what the AI equivalent of the silicon chip will be that makes LLM vacuum tubes obsolete.
Tiptoeing through the tulips.
The question is:
Is this a race to the bottom for AI? Will the proliferation of AI bots, agents, and services lead everyone to think AI should be free? What happens when Reddit, X, and every other social media company prohibits scraping? Do you trust an LLM trained on Wikipedia? How many ‘news’ sources are merely opinion pieces claiming objectivity?
Training on data that is artificial has its own set of problems…