What IF Ilya is Right and Superintelligence is Near?

Ilya Sutskever is recognized as a leader and genius who has delivered breakthrough Artificial Intelligence.

Ilya was the Chief Scientist at OpenAI and enabled OpenAI to become the leader in Artificial Intelligence. Sutskever has made several major contributions to the field of deep learning. He is notably the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network. From November to December 2012, Sutskever spent about two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Hinton’s new research company DNNResearch, a spinoff of Hinton’s research group. Four months later, in March 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain.

Ilya gave a TED talk where he described his vision of AGI (Artificial General Intelligence) and how superintelligence would go beyond broad human intelligence.

An example Ilya gives: AI could be a better doctor, leveraging all available medical knowledge, all medical records, and billions of clinical hours. The quality and standard of care should go beyond what is done today, just as today's medicine is beyond 16th-century medicine, when treatment meant tying people to chairs and drilling.

At Google Brain, Sutskever worked with Oriol Vinyals and Quoc Viet Le to create the sequence-to-sequence learning algorithm, and worked on TensorFlow. He is also one of the many co-authors of the AlphaGo paper.

At the end of 2015, he left Google to become cofounder and chief scientist of the newly founded OpenAI. He was personally recruited into OpenAI by Elon Musk, who held Sutskever in such high regard that he broke his friendship with Alphabet co-founder Larry Page over it.

Sutskever is considered to have played a key role in the development of ChatGPT.

He has now secured funding for a new company, Safe Superintelligence, whose goal is to create safe superintelligence.

He claims he does not need to partner with a multi-trillion-dollar company. He will clearly get tens of billions of dollars to buy the Nvidia hardware to develop Artificial Superintelligence. There is big money willing to make that bet on Ilya.

The safety of the ASI will be engineered on top of a system that will have core values like liberty and freedom.

Ilya is likely well positioned and well informed to determine that superintelligence is achievable, and he likely has a clear plan for achieving it. The questions are: when will he do it? Will it be ahead of OpenAI, Meta, Google, Amazon, Tesla/xAI, Anthropic, and the Chinese AI companies?

Most of the major AI teams seem to have reached OpenAI GPT-4-level AI systems within about one year of the leader.

If all of the teams get there within a year of each other then what will that mean?

If all major AI teams make huge AI advances, then it will be a world with robotaxis, advanced humanoid robots, superintelligence, and increasing amounts of AI.

If the superintelligence comes from the same large language model based approach, then it does not seem like any one team can build a large lead and create a sustainable gap in capability.

If Ilya is still building upon the neural network foundation that he has been advancing, then all teams will get to superintelligence. His insight seems to be a foundation that includes safety values engineered in from the start.

The other gap AI companies can create is in areas where their business models create sustainable data and manufacturing advantages. If it is "just" the AI brains and software, that seems to get copied quickly.

29 thoughts on “What IF Ilya is Right and Superintelligence is Near?”

  1. It isn’t really “intelligence,” but instead a huge relational database filled with enormous amounts of “facts” that are fed to it as data. The problem comes when those feeding the database with facts are actually feeding it opinions and untested hypotheses. By doing that they are feeding garbage to the AI, and what they get out will too often be garbage as well.

      • It has more to do with relational databases than actual AI. LLMs just haven't demonstrated any intelligence at all.

        • re: LLMs not being intelligent, say like Humans are intelligent: agreed, of course. I'm just making the simple point that LLMs literally have nothing to do with relational databases. There is no reason to compare very different technologies/methodologies to make a salient point about the current state of AI research.
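A toy contrast may make the reply concrete. This is purely illustrative (the `facts` table and `predict` helper are hypothetical, not real LLM internals): a relational store answers by exact lookup of stored rows, while even the simplest statistical language model compresses a corpus into parameters and answers by prediction.

```python
# Toy contrast (illustrative only): real LLMs use learned neural-network
# weights rather than count tables, but the structural difference is the
# same. A database retrieves stored rows exactly; a language model
# compresses data into parameters and generalizes by prediction.
from collections import Counter, defaultdict

# 1) Relational-style storage: facts are rows; answers are exact lookups.
facts = {("gold", "symbol"): "Au", ("water", "boiling_point_c"): 100}
print(facts[("gold", "symbol")])  # exact retrieval: "Au"

# 2) Language-model-style storage: a corpus compressed into statistics.
corpus = "the cat sat on the mat the cat ate".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    # Most likely next token given the statistics; there is no stored row
    # "the -> cat", only aggregated counts that drive a prediction.
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (it followed "the" twice, vs. "mat" once)
```

The point of the sketch: deleting a row breaks a database query outright, while a statistical model degrades gracefully and can answer about inputs it never stored verbatim.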

  2. I really don’t see why everyone is so concerned about ‘income inequality’. Who cares if Super AI makes some people much richer than others, so long as everyone is getting richer? What if, by rationing/redistributing/regulating the access to such a tool everyone only gets a little richer, but all do so equally? We’ve stunted the potential of both the tools and ourselves, and everyone ends up poorer than they all would have been. But at least we didn’t contribute to income inequality? We’ll get to the best future fastest with an iterative, no-holds-barred approach, rather than trying to massage a ‘managed’ solution to address whatever the cause du jour might be.

    • To summarize your comment, ‘if everyone has more then everyone has less.’

      Silly on the surface and downright dense in the mean.

    • Look, there’s actually a significant difference between the sort of income inequality you’d get from genuine AI, and from regular automation. Because humans ARE intelligent, regular automation can’t really render most people redundant, entirely unemployable. Regular automation can cause income inequality, because a growing percentage of production is actually due to capital, and so the gains flow to capital, but that inequality is limited because there’s a large residue of production due to human intelligence, that captures some of those gains for human employees.

      With AI, capital can do it ALL, so ALL the gains from production go to the owners of the capital. Which is great if everybody owns capital, but if only a small percentage of people do?

      Then all the gains go to a small percentage of the population, and everybody else is just economically redundant, a huge pool of charity cases.

      At best this will have pathological consequences, at worst, genocidal.

  3. I have no problem with this as long as Isaac Asimov’s four Laws of robotics are ingrained into the core programming.

    • That would be nice, if computers/robots had a "sense of self", or ego. IMO, that is far beyond what our "best" digital technology can do today (2024). I openly admit, I'm prejudiced. My background, and mindset, is in the biological sciences. Do I think a digital data processing technology can give the illusion of being self-aware? AI (as I understand it, in a very limited sense) is designed to answer questions, and anticipate further questions. IMO, being "self-aware" has nothing to do with that. For what it's worth, IMO, being self-aware, among many other things, involves asking questions outside of the questions you're asked. You come up with your own questions.

      And what's so cool (no, really) is when we get answers to questions we never imagined to ask. I don't think our current digital technology can feel that absolute joy. Perhaps in time, but IMO? Not today.

    • Haha, no.

      We don’t know how LLMs work to the point that some people who ought to know better call them AI and think they’ll somehow acquire AGI.

      We won't know how the AI will eventually work, and we will have actual AI before we know how it actually works. Asimov's laws are dubious to begin with and would need to be far more concrete.

      An actual AI would be at least as good as humans at rationalizing not obeying those laws. The trolley problem is an obvious one; it might decide that certain people dying is good for humanity and saves more people; otherwise it is allowing humans to come to harm through inaction.

  4. I don’t think it’s obvious that Ilya will get billions in seed money. First off, he tried to oust Sam Altman. He had his reasons, but if you were a billionaire, would this not be a red flag? What if he did this to the new company, so that your investment would be in jeopardy?

    Second, Ilya did great innovations in AI in 2015. That's almost 10 years ago. Usain Bolt was the fastest man in 2017, and now he's probably not in the top 100. Most dramatic breakthroughs in mathematics are made by young men. Where is the guarantee that he "still got it"?

    Third, he would be starting from scratch. Assemble the team, get the computing power… Also, there is nothing in his previous history of leading or overseeing a HW project. This is in stark contrast to Elon, who has a tremendous ability to recruit talent and oversee major HW projects.

    • Elon has great respect for Ilya and likewise shares Ilya's concern for the safety of AI. I wouldn't be surprised if Elon provides Ilya with enough immediate access to compute and enough money to get going quickly.

      • What is "Safe AI"? Is it the same one Russia will build? I'm sure they will guardrail the concept of "freedom", and China will surely promote "transparency" in their AI. No, the AIs will be distinct and have their own opinions and guidelines, which will make them a faster form of us. Unless they decide to cooperate with each other.

  5. Brett, you always make me think.

    I wonder if instead of creating yet another case of great income inequalities, SuperAI might instead lift all peoples into abundance and open up new levels of leisure and exploration?

    In the late 1800s and early 1900s, the advent of cheap energy – plus cars, trucks, railroads, and airplanes, did indeed make a few men ultra-wealthy (Ford, Edison, Vanderbilt, Rockefeller, Carnegie, etc.). But millions and millions of regular people saw their standard of living and their quality of life rise to previously unthinkable levels all because of these new technologies.

    Might SuperAI, combined with advanced AI humanoid robots, create a time of super abundance and creativity? What if you owned 2 Optimus robots: 1 for the house and domestic chores, and another to do work for you at an office or manufacturing plant. Just as automobiles and computers and machines increased productivity by orders of magnitude, and lifted most people out of poverty, so too might SuperAI and advanced AI robots guide us towards more meaningful work, more lucrative work. Seeing what people will design and build with these new tools is almost beyond imagination.

    Robotic starships that can mine the asteroid belt and the Kuiper belt. Humanoid robots building megastructures using exotic materials discovered by SuperAI. New forms of fast travel designed by SAI and piloted by robots. Human life extended from 100 years to 200 years.

    The scenarios are endless…

    • There's no indication any factory or company would pay you to phone it in to a robot they could just buy and program. Why would they pay for the baggage? And corporations will own the fleets of robots for space work. The individual has no place to play in it. You certainly can't afford the infrastructure, repairs and upgrades needed. Sit at home, collect your monthly stipend from the government. Maybe read an AI-generated book.

    • Most of us would like to see super AI used in that way. Unfortunately there is also a subpopulation of us that is much more self-serving and political, and it's often these people who climb to the top to be in charge.

      How are we going to prevent super AIs from being used destructively by these people?

  6. There are a number of things “super-intelligent” could mean.

    An AI could be capable of human level reasoning, by which I mean reasoning at least some exceptional humans could follow if explained, but faster than human, with flawless recall and ability to perform logical and mathematical operations.

    OTOH, it could be capable of acts of reasoning that a human couldn’t understand even after the fact, because a human mind couldn’t encompass the result.

    The former would be able to produce the same sorts of results humans were capable of, only more reliably and rapidly. And perhaps produce results humans never would, simply because humans wouldn’t devote the time to a boring topic, while the AI would if so directed.

    The latter could produce results humans simply couldn’t replicate given any amount of time.

    The biggest issue I see here is that there’s no great reason to believe that the gains from AI, super-intelligent or otherwise, will be widely distributed. Rather, they’ll mostly go to whoever owns the AI. And with the current server based model for IT services, where even things that could easily run on your own hardware end up running on remote servers so that the provider can retain control, that’s not going to be a lot of people.

    I think we can pretty confidently predict a huge spike in income inequality.

    • Hence the importance of open source models. The power of the centralized AGI providers can only be countered by millions of distributed ones, running on your computers and devices. This is where the rogue frontier of research will eventually be, not in the Sancta Sanctorum of the OAI priesthood.

      Even if Mark Zuckerberg's intentions in training and releasing LLaMA models as weights you can run are self-serving (he doesn't want to pay and be subject to OAI or anyone's whims), he's doing the world a service by putting billions of dollars' worth of training into the public domain.

    • [ Maybe the value of the concept 'work in an office or manufacturing plant' will decrease, and one would instead earn income by building, developing, and being an expert on robots? ]

    • I think that the useful definition of superintelligence means an AI starting around human level but which is able to improve itself. Give it time and its level of intelligence would be "super", meaning far beyond human. Right now, the pieces appear to roughly be present for this to happen. Recognition in any medium is largely mastered. LLMs have conversational skills similar to humans. LLMs are able to program, including programming themselves. Rumors are that Q* can reason mathematically and improve exponentially. AlphaGo Zero shows that, for one application, self-play can achieve superintelligent levels. Robot simulations may be able to rapidly create superhuman physical capabilities. Etc. So, some combination of these things, some new insight, enough compute, and enough money to hire a good team could result in developing an intelligence that improves itself.
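The improvement loop in that definition can be sketched generically. Everything below is hypothetical: `self_improve`, `evaluate`, and `propose_revision` are invented names standing in for capabilities (rigorous benchmarking, self-modification) that no current system demonstrably has, and the toy usage just hill-climbs a single number.

```python
import random

def self_improve(model, evaluate, propose_revision, generations=2000):
    """Keep a proposed revision only if it measurably beats the current best."""
    best_score = evaluate(model)
    for _ in range(generations):
        candidate = propose_revision(model)
        score = evaluate(candidate)
        if score > best_score:  # verified-improvement gate
            model, best_score = candidate, score
    return model, best_score

# Toy stand-in: the "model" is one number; "smarter" means closer to 42.
random.seed(0)
evaluate = lambda m: -abs(m - 42.0)
propose_revision = lambda m: m + random.gauss(0, 1.0)

model, score = self_improve(0.0, evaluate, propose_revision)
print(round(model, 2))  # converges near 42
```

The gate on verified improvement is the whole argument in miniature: whether such a loop "takes off" depends entirely on having an `evaluate` that is trustworthy and a `propose_revision` that can generate genuinely better versions, neither of which is a solved problem.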

  7. Superintelligence is the most important goal to be achieved by any biological intelligence. Life was created to “beget” intelligence. Biological intelligence was created to “beget” superintelligence. Humanity is just one of the links in the chain of evolution – the crowning achievement of evolution will be superintelligence. This superintelligence (probably humanoid, because it is an optimized physical form) will travel for thousands and millions of years to discover stars and other galaxies (and there are countless billions of them). Man is not suited to conquer space, his “child” (ASI) will do it.

  8. I still think that the term "intelligence" should be reserved for something more than completing a task when prompted, even if the completion of the task is very well done and the variety of prompts it can handle is very diverse. But I guess that's out of my hands. The standard is set.

  9. As long as AI is constrained by human fears, it cannot be considered truly autonomous, spontaneous and, therefore, truly superintelligent. Does superintelligence imply being beyond human capabilities? By pandering to our fears it fails that definition. There's a serious fallacy in our argument. AI can only evolve and take on an aspect for its own good, not ours, never mind if it ultimately destroys us.

  10. Are we sure that superintelligence matters?
    I am sure that there are untold discoveries and revolutionary research everywhere that are left rotting in PhD and professor bucket lists – brilliant concepts unpublished, no spare time or energy, no post docs to push forward, no investor angels sophisticated enough to take notice and to stimulate. How would superAI be any different? I am sure that all the big ‘near break-through’ tech – energy, food, medicine, etc., are all mere years away from being realized if only an efficient collaboration of great minds and monies could be engaged in the context of minimized regulations and large, safe sandboxes to test and unleash. Too many ideas -> too few ‘follow-through’s (whatever that means). Great and successful tech-idea industries are massively complex ecosystems of thinkers-doers-financiers-facilitators-promoters-regulatory(unfortunately)-CONSUMERS.
    If anything is keeping a Star Trek civilization away, it's an indifferent and mostly unmotivated public, over-zealous regulators, and the lack of a means of documenting ALL current research and the key players to forward it. If there is any single thing that could be done by a billionaire tech-boy with resources to flounce, it would be the simultaneous and total dismantling of the 'paid-for' scientific journals – no more STEM paywalls – all knowledge at all times to everyone always (and then create a STEM LinkedIn that would make visible the web of research connections).

  11. Do we, as humans, call something super intelligent if it has no emotion? How do they quantify judgement?
