Elon Musk has emphasized the opportunity for distributed AI inference using the fleet of Tesla cars and the future fleet of Teslabots. Here I try to characterize what distributed AI inference with that future fleet would look like. I have a forecast of the number of cars and bots in each year, along with specifications for the Tesla AI5 chip, assumed to deliver roughly 10X the performance of Hardware version 4.
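The forecast above can be turned into a rough back-of-envelope estimate of aggregate fleet compute. This is a minimal sketch: the per-chip throughput, utilization fraction, and fleet sizes below are all illustrative assumptions, not confirmed Tesla specifications.

```python
# Back-of-envelope estimate of aggregate Tesla fleet inference compute.
# All figures below are illustrative assumptions, not confirmed Tesla specs.

HW4_INT8_TOPS = 50                    # assumed per-vehicle throughput for Hardware 4
AI5_INT8_TOPS = HW4_INT8_TOPS * 10    # AI5 assumed at ~10X Hardware 4
UTILIZATION = 0.5                     # fraction of each chip free for distributed work

# Hypothetical fleet forecast by year: (AI5 cars, Teslabots)
fleet_forecast = {
    2026: (1_000_000, 50_000),
    2028: (5_000_000, 1_000_000),
    2030: (15_000_000, 10_000_000),
}

for year, (cars, bots) in fleet_forecast.items():
    total_tops = (cars + bots) * AI5_INT8_TOPS * UTILIZATION
    # 1e6 TOPS = 1 exa-op/s (loosely treating INT8 ops as the unit)
    print(f"{year}: {total_tops / 1e6:,.1f} exa-ops/s of aggregate INT8 compute")
```

Even with these placeholder numbers, the point is that aggregate compute scales linearly with fleet size, so the forecast of units per year dominates the estimate.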
Brian Wang is a Futurist Thought Leader and a popular Science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
I doubt the cars have enough memory to run even the AI models they were planned to run when they were designed. Running additional software that consumes all of the available memory and bandwidth would be a problem, and running FSD and other workloads simultaneously is even less realistic.
They would have to do this while the cars are sitting still, charging for long periods, with access to high-bandwidth connections.
It is a good concept, but the current hardware isn't ideal.
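The memory and bandwidth objection can be made concrete with a quick feasibility check for a parked, charging car. Every number here is an assumption for illustration (model shard size, spare memory, home Wi-Fi speed), not a measured Tesla figure.

```python
# Sketch: can a parked, charging car host a shard of an inference model?
# All numbers are illustrative assumptions, not Tesla specifications.

MODEL_WEIGHTS_GB = 16     # assumed size of a full inference model
CAR_FREE_MEMORY_GB = 8    # assumed memory left after FSD reserves its share
HOME_WIFI_MBPS = 200      # assumed sustained home Wi-Fi bandwidth

def fits_in_memory(model_gb: float, free_gb: float) -> bool:
    """A car can only host a shard small enough for its spare memory."""
    return model_gb <= free_gb

def download_hours(model_gb: float, mbps: float) -> float:
    """Time to pull the shard over the car's connection (GB -> gigabits)."""
    return model_gb * 8 * 1000 / mbps / 3600

print(fits_in_memory(MODEL_WEIGHTS_GB, CAR_FREE_MEMORY_GB))  # full model does not fit
print(f"{download_hours(CAR_FREE_MEMORY_GB, HOME_WIFI_MBPS):.2f} h to fetch an 8 GB shard")
```

Under these assumptions the weights transfer itself is quick relative to an overnight charging window; the binding constraint is spare on-board memory, which is why sharding the model across many parked cars, rather than running it whole, is the more plausible design.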