The Stanford Alpaca project demonstrated using a larger, more expensive AI model to train a smaller, cheaper one. The cheaper model performed as well as, and in some cases better than, the more expensive model. The expensive model generated large amounts of high-quality training data to improve the smaller model, lowering the cost of training by roughly a thousand times. This can be viewed as a form of AI compression: a smaller model might use twenty times fewer parameters and still fit onto cheaper hardware.
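The teacher-trains-student idea can be sketched with a toy example. This is not Alpaca's actual pipeline (which used GPT-generated instruction data to fine-tune LLaMA); here a fixed "teacher" function stands in for the expensive model, it labels cheap synthetic inputs, and a much smaller "student" model is trained only on those labels:

```python
# Toy sketch of model distillation: a "teacher" labels cheap synthetic
# inputs, and a much smaller "student" is trained only on those labels.
# The 2-D task and all names are illustrative assumptions, not Alpaca's setup.
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for the large, expensive model: a fixed nonlinear rule.
    return (x[:, 0] ** 2 + x[:, 1] > 0.5).astype(float)

# Step 1: the teacher generates a large synthetic training set.
X = rng.uniform(-1, 1, size=(5000, 2))
y = teacher(X)

# Step 2: train a tiny student (logistic regression on three features)
# on the teacher's outputs via plain gradient descent.
feats = np.column_stack([X[:, 0] ** 2, X[:, 1], np.ones(len(X))])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-feats @ w))
    w -= 0.1 * feats.T @ (p - y) / len(X)

# Step 3: the student now imitates the teacher on held-out inputs,
# despite having only three parameters.
X_test = rng.uniform(-1, 1, size=(1000, 2))
f_test = np.column_stack([X_test[:, 0] ** 2, X_test[:, 1],
                          np.ones(len(X_test))])
student_pred = (1.0 / (1.0 + np.exp(-f_test @ w)) > 0.5).astype(float)
agreement = (student_pred == teacher(X_test)).mean()
```

The point of the sketch is that the student never sees ground truth, only the teacher's outputs, yet ends up agreeing with the teacher almost everywhere at a fraction of the parameter count.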
This might allow superior Tesla FSD (Full Self Driving) performance on Hardware 3, delaying the need for customers to upgrade to more costly Hardware 4 or Hardware 5 to achieve acceptable robotaxi performance.

Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting-edge technologies, he is currently a Co-Founder of a startup and fundraiser for high-potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
AI running on your own personal hardware is the only hope of retaining control over your personal information. And it’s not a strong hope – too many powerful companies want it, and there are too many ways that it can leak and be gathered or just inferred by their AIs. Still, demonstrating that useful if not terribly ‘knowledgeable’ AI can run on cheap hardware is a start.
Control over personal information? Sure there will be control, just not yours. The recent end-run legislative attacks on end-to-end messaging encryption lay the foundation for end-device content scanning. Why centralize an information control system when you can distribute self-censorship, one device at a time? Opaque AIs with unknown training sets running on your smartphone are the endgame. They’ll push it as enhanced CSAM blockers because “think of the children!” or such drivel. Apple once tried to push end-device CSAM scanning using perceptual hash technology, because they saw the writing on the wall and wanted to be an early collaborator for favorable treatment and profits, rather than a later victim torn apart by the government.
Cheap scab A.I.