Leaders in Self Driving Cars Say Tesla is by Far the Leader

On May 22, Nvidia CEO Jensen Huang said Tesla is far ahead in self-driving cars, and on May 15, Xu Baoqiang, general manager of Baidu’s autonomous driving vehicle department, said in an interview that Baidu is considering potential collaboration opportunities with Tesla for the latter’s upcoming robotaxi service.

Nvidia supplies self-driving car and robotaxi companies with chips (Orin and Thor) and software. Baidu operates Apollo, the leading robotaxi service in the world. The leaders of the competing self-driving and robotaxi companies are saying Tesla is in the lead in self-driving. Should we trust their expert and informed opinion?

Li Auto uses DRIVE Thor for its next-generation EVs. GWM, ZEEKR and Xiaomi are developing AI-driven cars powered by NVIDIA DRIVE Orin.

Nvidia Automotive:
First-quarter revenue was a record $296 million, up 114% from a year ago and up 1% from the previous quarter.
Announced that its automotive design win pipeline has grown to $14 billion over the next six years, up from $11 billion a year ago.
Announced that the world’s leading electric vehicle maker BYD will extend its use of NVIDIA DRIVE Orin™ across new models.

The top robotaxi and autonomous vehicle companies in China are:
AutoX
Baidu Apollo
Didi Chuxing
Pony.ai
WeRide

Baidu Apollo seems to be the world leader, with more rides given and miles driven than Waymo (the Google-backed US leader in robotaxis).

Robin Li, co-founder and CEO of Baidu, said Baidu aims to expand its autonomous ride-hailing platform, Apollo Go, to 65 cities by 2025 and to 100 cities by 2030. The company plans to deploy tens of thousands of autonomous vehicles across China and is moving toward a future where taking a robotaxi will cost half as much as taking a cab today.

By April 2024, Baidu Apollo had accumulated over 100 million kilometers (60 million miles) of autonomous driving without a major accident. The sixth-generation Apollo Go self-driving vehicle, which costs 60% less than its predecessor, will see its first batch deployed in Wuhan immediately, with a fleet of 1,000 units expected by the end of 2024.

10 thoughts on “Leaders in Self Driving Cars Say Tesla is by Far the Leader”

  1. Sorry, it’s all a moot issue. Very few members of the public have any confidence in self-driving. It’s simply not ready technologically.

    • This may or may not be true (I’m not the one to make that call), but as a worst-case consolation prize, the basic technology behind self-driving can be used for mobile robots and robot vehicles on private industrial property. That has to be good for something, even if it falls short of the world-changing societal impact some envision.

  2. The most recent Tesla product is the Cybertruck.
    The Cybertruck’s warranty can be voided if you take it through a car wash without manually enabling Car Wash Mode.
    Apparently the most sophisticated AI of the moment is not capable of recognizing on its own whether it is in a car wash.

  3. He also said “don’t waste time learning to code,” which was really bad advice, as far as I know. Generative AI is not a coder but a coding assistant. You still need coders.

    • It’s good advice.
      Today it’s an assistant coder, so if you code, it’s a massive productivity increase.
      But if you don’t know how to code, don’t waste your time learning, because by the time you’re proficient, those jobs will be gone.
      Cognition is the company that created Devin, an AI coder. It’s not as good as a person yet. But it did recently partner with Microsoft, which has been doing an amazing job recruiting talent and becoming an AI powerhouse that no company can match.

      • No, it is not. Generative AI can at best generate boilerplate code for common problems with simple solutions. What it cannot do is reliably generate correct and efficient code, especially for anything remotely unusual. Generative AI has infinite capacity for generating a torrent of hallucinated garbage, and knowing when and to what extent it can be used is a skill in itself. All the code it vomits forth you still need to integrate, sanity-check and massage into an existing code base.

        In the hands of someone who cannot code generative AI is like a machine gun in the hands of a child. It is more destructive than constructive.

        • Hallucinations are being worked on, and in under two years they will be gone.
          You seem to think the AI surge is done…it’s just getting started. Advances are not slowing down. Hell, we would all be rocking GPT-5 by now, but OpenAI pushed back its release to November because of election concerns.

          • Yes, because it *is* done for now. Further improvements require algorithmic advances, and that is slow, rate-limited by the number of people who can put eyeballs on the problem, gain new insights and make new discoveries. This is super slow. Hit snooze for another decade. Neural networks were invented in the 1940s; the compute power wasn’t really there to make any interesting use of them until recently.

            More compute now will not help; the current approach is limited by data, which has already run out. Unless you can generate your own data, you cannot get further without being clever, and clever is hard. By generating your own data I mean things like running path tracing until convergence to get a ground-truth image to train against.

            There have been papers investigating how AI scales with more data, and the scaling is logarithmic: you need exponential increases in data volume to get linear improvements in ability. This is intuitively reasonable; 10 times more training data gives you 10 times as many implementations of bubble sort, the singleton anti-pattern and similar things that are already massively overrepresented. But 10 times nothing is still nothing; the super-rare examples you really need are simply not forthcoming.

            The LLM is not AGI; it cannot generalize, infer and learn. It is simply completing sentences in a statistically likely way, and if this process generates anything usable at all, it is a miracle. That’s why Google’s LLM suggested glue as an ingredient in pizza cheese when asked how to get the cheese to stick: it trained on articles about how marketing critters design their bullshots of the product; lipstick on the strawberry, glue in the pizza “cheese”, etc. This is a deeper problem than hallucination; it gets to the core of LLMs not being AI. Somehow attaching real AI to LLMs is not a solved problem, and it’s a hard problem.
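
            The logarithmic-scaling claim above can be illustrated with a toy calculation (the functional form and constants here are illustrative assumptions, not figures from any specific paper):

```python
import math

# Toy model of the scaling claim: assume model ability grows with the
# logarithm of training-data volume, ability(D) = b * ln(D).
# The coefficient b = 1.0 is an arbitrary illustrative constant.
def ability(data_volume, b=1.0):
    return b * math.log(data_volume)

# Inverting the model: data volume needed to reach a target ability.
def data_needed(target_ability, b=1.0):
    return math.exp(target_ability / b)

# Under this assumption, each +1 step in ability multiplies the
# required data volume by e (~2.72x): linear improvement in ability
# demands exponential growth in data.
volumes = [data_needed(t) for t in range(1, 5)]
ratios = [volumes[i + 1] / volumes[i] for i in range(len(volumes) - 1)]
print(ratios)  # every ratio equals e ~= 2.718
```

            Whatever the true coefficient, any logarithmic curve has this property, which is why the comment argues that merely collecting more of the same data quickly stops paying off.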

            More data cannot be easily generated for the following reasons:
            – Everyone is realising the data is worth gold and is rapidly locking it down. Scraping the internet used to be free and simple before Google; it isn’t anymore, and it is even more expensive and restricted now than it already was 5 years ago.

            – LLMs increasingly eat their own cooking as more and more LLM-generated garbage is put on the internet. If such data is not discarded, you get a GIGO cascade as LLMs progressively degenerate.

            – Exponential growth in data for linear improvement in performance means the rate-limiting step of humans writing code and sharing it on the internet is a killer. Even if the data were freely available, it would take 100 years to gather enough to improve performance as much as the last 10 years of data did. If people increasingly use LLMs to generate that data, it will also be less useful data. Logarithmic scaling is really awful.

            – Legally it is all being challenged. LLMs will probably be required to be able to forget data, and users will likely be able to opt out of having their art, writing, code, etc. scraped. The business model of OpenAI, just like Google’s and Facebook’s, relied on moving fast and breaking things: circumventing old copyright law and making something so useful that people and regulators wouldn’t dare question it too much. They worked in silence for a decade and then suddenly sprang it on the public; this generated a lot of sudden hype and an “AI” bubble in stocks. If they cannot get profitable and make something truly useful to a wide audience soon, they will be too late to normalize and excuse massive copyright infringement and will become bogged down in endless legal battles. They will die under the weight of copyright law, and that alone might set AI back another decade, because the next generation may have less data to train on than the previous one.

            There’s no obvious way to solve any of these problems, which is why it will take time to do the hard work. There might not be such a thing as AGI at all, just a bunch of domain-specific models kludged together, which appears to be how the human brain works. You’ve got a facial-recognition module that makes you awesome at recognizing human faces. You’ve got a proprioception module that makes you awesome at knowing where your limbs are without looking. You’ve got a “kinematics” module that can predict how muscles should activate to efficiently move your leg from one position to another and even dynamically maintain balance, a module for detecting the direction of sounds and the kind of space you are in from how it echoes and filters the sound, etc. People with very specific brain damage can sometimes have non-damaged regions of the brain take over tasks, but this effect is limited and rare. More commonly, if you have some issue with proprioception or the like, you have to work around it, such as looking at your feet when you walk for the rest of your life.

          • Too bad they can’t fix humans to stop them from hallucinating and confabulating. These reduce a bit after childhood, and they’re more noticeable in humans with certain mental illnesses, but they never stop completely, even amongst the “sane”.

Comments are closed.