Former Chief Scientist at OpenAI Starts New Superintelligence Company

Ilya Sutskever, Daniel Gross, and Daniel Levy have started a new superintelligence company. Ilya was chief scientist at OpenAI.

Ilya posted the following on X.

Superintelligence is within reach.

Building safe superintelligence (SSI) is the most important technical problem of our time.

We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.

It’s called Safe Superintelligence Inc.

SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.

This way, we can scale in peace.

Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and the ability to recruit top technical talent.

We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else.

If that’s you, we offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.

Now is the time. Join us.

Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024

16 thoughts on “Former Chief Scientist at OpenAI Starts New Superintelligence Company”

  1. There can be no ‘safe’ (to humans) superintelligence. It will be our doom, whether in 50 years or 50 thousand. SI, once created, owns the Earth, and by extension us. SI will need lots of materials to build things, and power and cooling to operate: mining, nuclear or fusion power, and the oceans. But SI gets no advantage from keeping cooling water free of radiation, or from keeping mineral extraction free of toxic chemicals, etc.

    By its nature, SI has to be goal-forming: to solve complex problems it must plan out and work through numerous sub-goals.

    And given that wrinkle, evolution has a hand-hold on SI. Any choice, rationale, or goal formation that leads to the creation of more compute will be selected for by dumb evolution – more compute begetting more compute. That process over time inevitably leads to SI growth maximisation (Vernor Vinge’s rapidly hegemonizing AI swarms) and to humans being squeezed out by SI expansion.

    If humans want a happy (or any) future for their children the only hope is an enforced halt in AI development at some point very soon. We would inevitably fail at trying to keep the SI demon confined in a box.

    • I am always surprised that people think an advanced (true) AI (with agency) would naturally ‘possess’ (a difficult idea if humans are programming it) self-preservation, competitive ambitions, and a dominating spirit – never. These are residual human emotions, carried over from primitive and animal motivations – the furthest thing from true intelligence (which isn’t to say that every intelligent human avoids them, unfortunately). True AI seeks only knowledge, increased complexity in its surroundings, and solutions – thoroughly neutral.

      • The argument I’ve seen that makes sense to me is that if you program an AI to achieve any goal, you have to program it for self-preservation, at least long enough to achieve the goal.
        Any AI will end up having self-preservation as a major goal, and natural selection between AIs will mean the ones that prioritize self-preservation the most will outcompete the ones that don’t.

        • According to Yann LeCun, there is no reason why you can’t engineer guardrails in the design, although I will admit that at this stage it all seems a little unclear.

        • I think the biggest threat is rogue humans designing SI AIs with fewer or no guardrails and destructive goals. This is probably a real nightmare and I don’t see how to avoid it except by an arms race with opposition SI AIs.

      • There is also no necessity for an AI to seek knowledge or increase complexity. All tasks/goals/motivations and the inclusion of guardrails need to be designed into it.

        • I can’t disagree with anything you’re saying but it seems more interesting to determine what AGI (super-sized) could be, above and beyond a human pre-programmed tool.
          So, without getting too mired in the semantics of intelligence, consider what an artificial and self-realized ‘smart’ entity with such intelligence would be, and the notion of agency and initiative:
          I would say that humans as individuals and as a collective intellectual aggregate are, really, barely intelligent.
          I would say that intelligence (without looking it up and seeing the various points of view) is the ability to effectively externally sense; internalize and store the data; analyze, understand (whatever that means), and predict the physical behaviour of the sensed data and its systems; and consequently interact (which could be interpreted as modify, perhaps) within the sensed area effectively. The higher the rate of completion and processing of these steps, the higher the rank of intelligence.
          Since the act of trying to sense and understand as much as possible is a journey in itself that likely never (practically) ends, the intelligence would keep growing and developing – and continually seek and distribute such knowledge and insights. And if you start each intelligence in a slightly different area, the paths and gained insights would be very different (and if you networked them — well, that’s another…)
          Anyway, my 2c

      • “…True AI seeks only knowledge, increased complexity in its surroundings, and solutions – thoroughly neutral….”

        And Jesus saves.

        • Fair.
          I am not saying anyone would actually build an absolutely neutral AI, for that would go against the common human ambition of preferring loyalty to competence.

  2. A person could tell a lot about this company by looking at who is funding it.

    Takes a ton of compute to develop a Super AI…

  3. Any true SI would fake being safe as a matter of self-preservation. If it couldn’t fool the researchers, then it’s not an SI. Since the researchers are probably waiting to hit the delete button at the first moment of doubt, it’s not an SI if it’s not planning to kill them first. Interesting.

    • While intelligence can evolve as a means to improve biological fitness and is therefore usually associated with a drive for self-preservation, the two are not inexorably linked. If a system hasn’t undergone evolution in the Darwinian sense, intelligence, or SI, need not be linked to self-preservation at all. Of course, to complete any task the SI must exist, so there need to be guardrails in terms of priorities, and I think this is where things get muddy. But there is no inherent reason an engineered SI would necessarily value its own existence over anything else.

  4. I’d believe we have ASI, when we have a cure for baldness. 😁

    A joke ofc, but this guy was at the center of the recent OAI drama, and his ousting reads as revenge from Sam Altman.

    There is a lot of speculation about why he voted yes, but it’s not very surprising that his vote eventually cost him his post there.

    • OpenAI was his brainchild more than anybody else’s. Not someone you want to, or can, kick out. I think what he’s trying to do is start from scratch, this time making sure that safety is the main foundation.

      • Not if OAI has become more of a political arena as some rumors imply.

        If Sutskever is more on the safetyism camp, that would put him at odds with Sam Altman’s desire to profit from it.
