Why and How Will Superintelligence Impact You and The World?

Ilya Sutskever, ex-Chief Scientist at OpenAI, has created a new startup, Safe Superintelligence. As Chief Scientist, he helped make OpenAI the leader in artificial intelligence. Sutskever has made several major contributions to the field of deep learning. He is notably the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network. From November to December 2012, Sutskever spent about two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Hinton’s new research company DNNResearch, a spinoff of Hinton’s research group. Four months later, in March 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain. He was personally recruited to OpenAI by Elon Musk in 2015.

Ilya is likely well positioned and well informed to judge that superintelligence is achievable, and he likely has a clear plan for pursuing it. The questions are: when will he get there, and will it be ahead of OpenAI, Meta, Google, Amazon, Tesla/xAI, Anthropic, and the Chinese AI companies?

Most of the major AI teams have reached OpenAI GPT-4-level AI systems within about one year of the leader.

Safe Superintelligence is going to focus completely on creating intelligence beyond human intelligence. Ilya believes this will be possible in a relatively short amount of time with a small team.

If all of the competing AI teams get there within a year of each other, then what will that mean? What will it mean to reach Artificial General Intelligence? What will it mean to go beyond human intelligence?

If all major AI teams make huge AI advances, then it will be a world of robotaxis, advanced humanoid robots, superintelligence, and ever-increasing amounts of AI.

9 thoughts on “Why and How Will Superintelligence Impact You and The World?”

  1. Safe Superintelligence is an oxymoron. There is zero reason to think it is possible to align superintelligence with human preferences over the long term, and a lot of reason (evolutionary and game-theoretic arguments) to think it is fundamentally impossible as such systems grow and alter themselves in ways we cannot anticipate, understand or stop. And we have to think about how it plays out over the next MILLION years, not just the next 15, because once Pandora’s box is opened there is no going back.

    There is hope amongst the superficially informed that it will work out, but hope is all it is, and hope is not a strategy.

    Humans are very limited creatures that have evolved with pro-social behaviours unconsciously biased towards trust and optimism – because those get the best out of cooperative groups of humans striving to survive in competition with other groups of humans. But those instincts betray us when applied to AI, which is fundamentally alien, not being limited in the ways that humans are: e.g. near-limitless resources, immortal, able to grow almost without limit, not requiring the support of others to grow, survive or achieve goals. It is more likely to tend towards what we would consider aloof/disinterested/antisocial/psychopathic treatment of the ant-like humans than to care for them with brotherly love.

    So those AI researchers trying to sell you on AI being safe are selling you a bill of goods; they are rationalising non-concern to enable continued employment on million-dollar salaries and insane equity shares in big AI firms. They are living like there is no tomorrow, and many harbour an unpublicised belief in transhumanism, or in human extinction with AI “children of the mind”, as the only likely end states. The general public aren’t aware of this.

    Final thought: most of the professionals in AI who have spent a long time thinking on the problem seem to believe there is a very high risk of human extinction (or eternal enslavement, being made pets, or far worse). So many safety researchers have been jumping ship or getting defenestrated of late that everyone’s alarm bells should be clanging.

    Who on the street would accept being forced to play a game of Russian roulette with (even by median AI researchers’ estimates) more than one bullet? That is what the globe is being marched towards, with the general public truly ignorant of the real danger.

    • Transhumanism would actually be the ideal outcome: IF we get superhuman artificial intelligence, the narrow way out is to leverage it to get superhuman *amplified* intelligence – to uplift ourselves before we’re totally left behind.

      The only way we remain in control is if we’re supplying all the agency. The dream of a benign wish granting genie is crazy.

  2. So, what should we expect?

    New technologies? Better physics models? Medical advances? Working molecular nanotech? Those would be nice.

    Faster algorithmic trading? More uncanny targeted advertising? Inhumanly cute AI influencers hawking video games as addictive as thionite? Maybe not so great.

    • Or maybe an avalanche of drug addiction, alcoholism, suicide, etc., among those who did not manage to find some compatible life mission in due time. Plus, perhaps, further reduced natural population growth and near extinction. Some may decide that it is wrong to have kids if their fate is to feel useless.

  3. In the near term my concern is not superintelligent AI, but powerful AI under the control of dumb, short-sighted, self-serving and greedy humans.
    In the longer term, if we get superintelligent AI but it is controlled by stupid humans, then arguably the results will still be stupid.

  4. “Just this year, AI security experts have issued ominous warnings that there is no evidence that artificial intelligence can be controlled and that the development of artificial superintelligence could spell the end of humanity.

    At a minimum, even if the technology exists in the next 5-10 years, these concerns should make us consider if we want to integrate AI into the human body.”

    See:

    https://thedebrief.org/futurist-predicts-humans-will-soon-live-1000-years-thanks-to-nanobots-and-ai/

  5. Superintelligence is ASI, not AGI. AGI is more likely to happen in the early 2030s, with ASI a few years later. 2027 is way too early; GPT-5, even fully exploited, won’t be AGI at all.

Comments are closed.