Bill Gates Says Superintelligence is Inevitable

Bill Gates talks about the inevitability of AI becoming more intelligent than humans. He has insider access to, and insight into, what OpenAI and Microsoft are doing in AI.

He believes the next level is to get to human-like metacognition. We need to go beyond the more trivial reasoning of LLMs today. Metacognition is an awareness of one’s thought processes and an understanding of the patterns behind them.

There will be some metacognition by next year in AI systems.

Key moments:
00:05 🌐 Bill Gates discusses AI’s transformative potential in revolutionizing technology.
02:21 🧠 Superintelligence is inevitable and marks a significant advancement in AI technology.
09:23 📱 Future AI may integrate deeply as cognitive assistants in personal and professional life.
14:04 🎓 AI’s metacognitive advancements could revolutionize problem-solving capabilities.
21:13 🔄 AI’s next frontier lies in developing human-like metacognition for sophisticated problem-solving.
27:59 🧠 AI advancements empower both good and malicious intents, posing new security challenges.
28:57 🌍 Rapid AI development raises questions about controlling its global application.
33:31 🚀 Productivity enhancements from AI can significantly improve efficiency across industries.
35:49 💬 AI’s future applications in consumer and industrial sectors are subjects of ongoing experimentation.
46:10 🌐 AI democratization could level the economic playing field, enhancing service quality and reducing costs.
51:46 🤖 AI plays a role in mitigating misinformation and bridging societal divides through enhanced understanding.

13 thoughts on “Bill Gates Says Superintelligence is Inevitable”

  1. Know what? I think my pro-biology prejudice has blinded me to intelligence and self-awareness in forms I am not familiar with, or more importantly, comfortable with. When, if ever, will digital technology become self-aware, an individual? Perhaps the day some robot says “I don’t like that” and, when asked why, says “I just don’t.” No rationale, no logical reason, just its opinion. Perhaps that’s one definition of a person: an entity that can be pissed off, about anything or nothing, at any one moment. Interesting way to identify an intelligent lifeform: “We know it’s not happy. Why? We have no idea.” That would be an interesting, if unproductive, webinar.

    I think that’s one way we are hindered in finding ET. We think a technological species will communicate like we do, say with radio. We will hear the “noise” another society will “leak out, give off,” etc. But this is nothing more than a massive leap of faith. 200 years ago, what was the fastest way to send a message, other than a guy riding a horse with a note in his pocket? Smoke signals, or fires on towers that breached the horizon. (Think Great Wall of China.) That worked quite well, for a long time. If you were in sight of the Great Wall. And, of course, knew what the signals said, assuming you knew they were signals at all.

    Another problem with radio: it fades out, dissolves over distance. The most powerful radio signals, the ones that go between galaxies and beyond, are (as far as we know) natural. They’re loud as hell but (again, as far as we know) don’t “say” anything. Pity, they’re REALLY loud. 200 years from now, will we have a more effective way to communicate across great distances? God, I hope so, but honestly, I have no idea. Then another problem (yeah, there’s more): what if ET is not a technological species, but still really smart?

    This is kind of a default question. How do you detect another sentient species that doesn’t make stuff we can detect? We may not know what the signals we detect mean, but we may notice they’re artificial. Nothing says we WOULD notice any signal as artificial. This is a HUGE if. One way we could learn to talk to a non-technological, very intelligent species is to try to communicate with dolphins and whales. They have been around millions of years longer than we have.

    They started in the water, moved to land, got bored or couldn’t get a break on the rent, and went back to the water. But about 20 years ago, we discovered something amazing about their echolocation. It’s far, far more than what we think of as sonar. It’s encrypted language. As with trying to break any code, I’d try to identify something I understand with something they must understand. My first guess would be fish, which both species cooperate to herd in complex, I would say strategic, ways, and scarf up. Seeing them hunt as a group, at a 3D level, is tactical genius.

    Another problem we would have with breaking their, or ET’s, language code is that our perception of reality is utterly different from that of whales and dolphins. I cannot even imagine how different it is from ET’s. Sea creatures have no concept of shelter or clothing. To us, those are fundamental concepts. Ever see a whale in a suit, or a dolphin wearing a hat? If you have, you need to cut back on the drugs. Ever hope to talk to ET? We should start by trying to talk with beings who live in a different reality, right here at home.

  2. It’s depressing that Sam Altman says venture capital is not available to fund things like OpenAI. Venture capital has become as risk-averse as banks used to be, while banks won’t lend to any but the most established large companies now. Angel investing is where venture capital used to be.
    This leaves innovators relying on friends, family, and credit cards with their usurious interest rates.
    We are leaving mountains of productive innovation on the table, for China, India, or anywhere else that still supports new risk ventures.
    I know. I tried to get two major ventures funded, and the initial amounts would have been small, in the 7-8 figure range, with high equity and potentially enormous ROI, in the case of the first project at least. But aside from a single local partner, no one was willing to take the risk, even after 100s of presentation pages, 40+ spreadsheets, a video, media coverage, and endorsements. No one doubted the profit potential of the innovative building (the first project); they just thought I’d never get permission, which is more or less what happened. But if there had been more financial and political support? Maybe we would have. There must be 1000s of such cases.

  3. The issue may be energy: how much energy do we need to match human intelligence? We are still a very efficient biological computer.

  4. Obviously that’s a VERY vague term, but I think AGI is 1-3 years away.
    I’d say 1-2 years after that, we will have ASI, and it will simply get more “Super” every year after that.

    • Hey dude, my original question still stands. Vague or not, what IS superintelligence? Definitions, please… In detail.

  5. I’d still love for someone to define what “superintelligence” is. This is still not clear to me (OK, call me slow). Is it super insight, wisdom, or previously unknown ideas that give us that AH HA! moment? (The latter are so cool when we feel them; they were when I did. I wish that feeling on everyone. With all my heart.)

    Now will someone please define, in detail what the hell superintelligence is?

    • It probably won’t be true intelligence, but it will be the ability to make the best choice of all choices based on the objectives of the AI.

      For example, let us say that you want to buy a car. In the AI system, you type in everything you want in that car, and it chooses, from among all the cars available to it, the car that makes the best compromises toward the car you want. It won’t rely on just key words and eliminate cars that don’t have your key words. It will actually be able to choose based on the qualities that you want, as well as weigh the qualities against each other for the best option.
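      A minimal sketch of that idea, assuming we score every car against weighted preferences instead of filtering by keywords. All car data, attribute names, desired values, and weights below are made up purely for illustration:

```python
# Hypothetical data: a tiny inventory of cars and the qualities we track.
cars = [
    {"name": "Hatchback A", "mpg": 38, "price": 22000, "seats": 5, "cargo": 20},
    {"name": "SUV B",       "mpg": 27, "price": 34000, "seats": 7, "cargo": 38},
    {"name": "Sedan C",     "mpg": 33, "price": 26000, "seats": 5, "cargo": 15},
]

# The buyer's wish list: a desired value for each quality, and a weight
# saying how much that quality matters relative to the others.
preferences = {
    "mpg":   {"want": 35,    "weight": 3},
    "price": {"want": 25000, "weight": 4},
    "seats": {"want": 7,     "weight": 1},
    "cargo": {"want": 30,    "weight": 2},
}

def score(car):
    """Reward closeness to each desired value, scaled by its weight."""
    total = 0.0
    for attr, pref in preferences.items():
        gap = abs(car[attr] - pref["want"]) / pref["want"]  # relative miss
        total += pref["weight"] * (1.0 - gap)
    return total

# No car is eliminated for missing a "keyword"; the best compromise wins.
best = max(cars, key=score)
```

      Note that no single quality vetoes a car here: a sedan that misses the seven-seat wish can still win if it is much closer on the heavily weighted price and mileage, which is exactly the “best compromise” behavior described above.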

    • I assume it means performance on intellectual tasks that’s beyond the normal range of human variation.

      To some extent AI already manages this, just by having access to a larger database of information than any human could assimilate in one lifetime. That impressive capability is rendered less impressive by current LLMs’ tendency to hallucinate information that looks like actual data but is false. Suppose they fixed that? Then a medical AI could always respond to queries based on the entire corpus of medical information, while a human doctor who spent all their time reading medical journals couldn’t keep up. That would be pretty big.

      On math, computers already exceed humans in speed and reliability when performing defined mathematical operations; this was, after all, their first application. This is starting to extend into symbolic math and logic, where computers are beginning to generate logical proofs that are beyond human capabilities on account of being too complex for humans to produce.

      Humans, of course, have some hardwired limits in terms of the number of things that we can consider in one operation; exceeding this requires breaking the operation up. In principle this limit doesn’t have to exist for an AI, which can simultaneously take into account more factors than a human can keep in mind at the same time.

      So, if you could combine these into one system that:
      1) Had access to more information than a human could accumulate in a lifetime,
      2) Instantly did math and logic, no matter how complex, without ever making errors,
      3) Could take into account more simultaneous considerations than a human was capable of,
      4) and added some sort of capacity for originality, maybe just by randomly generating concepts then rigorously testing them.

      You’d definitely have something that was intellectually beyond human limits, and was capable of accomplishments beyond what humans could achieve.
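      Point 4, originality as “randomly generating concepts, then rigorously testing them,” can be sketched in a toy way. Everything here is invented for illustration: the candidate “concepts” are linear rules y = a*x + b, and the rigorous test is exact agreement with a handful of made-up observations:

```python
import random

# Made-up observations the candidate concepts must explain.
# (They secretly follow y = 2*x + 3.)
observations = [(0, 3), (1, 5), (2, 7), (3, 9)]

def survives_testing(a, b):
    """Rigorous test: the rule y = a*x + b must match every observation."""
    return all(a * x + b == y for x, y in observations)

random.seed(0)  # reproducible run
found = None
for _ in range(10_000):
    a = random.randint(-10, 10)  # randomly generated "concept"
    b = random.randint(-10, 10)
    if survives_testing(a, b):
        found = (a, b)  # a candidate that survived testing
        break
```

      The generator is blind; all the intelligence is in the test. Real systems would generate far richer candidates than two integers, but the generate-and-test loop is the same shape.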

  6. The problem is not scientific, it’s social.
    Agree that AGI & ASI will happen in the near future.
    This will have significant consequences for the existing social structure.
    Unfortunately, most humans are not equipped to handle this level of change.
    The consequence is major social unrest / conflict. Where it ends up will depend on whether WW3 occurs or not.

  7. Some people still think it is 50 years off. But some think it will happen next year. The truth is probably in the middle, that is, 10 to 15 years, which is very soon in a way.

  8. I see much discussion on how to determine whether something is AGI or whether it is superintelligent. Also discussions on whether it is safe, or how it can be made safe. There is another philosophic term that seems important but is not brought up: wisdom. I can see where an entity can be super intelligent, but can one be super wise? I.e., can it give humanity advice that we know we SHOULD follow? That is best for humanity, AI, and the planet to follow? How could we put wisdom into a quantifiable term? What tests could we perform? Is wisdom an absolute term, or does it depend on point of view?

    • Yeah, you ask the same questions I’ve had for many decades. Back in the 1980s, I started a company that developed optical imaging systems, which was the basis of early, and secure, biometric technology. For the record, I have a PhD in biochemistry, a master’s in academic medicine (the kind of credential you get if you go into teaching or research, not clinical practice, which is what I did), and a BA in History. When I started my first company, all of the guys I worked with (it was all guys then), including my company’s co-founder, were computer programmers, except one. The one other guy was an electrical engineer. He felt lonely, as did I. In our own company.

      My co-founder and the other computer guys were of the opinion that if a computer became “fast enough,” it would, at some unknown magical point, become “self-aware.” I did, and still do, have a problem with this POV. I think a really fast digital computer is, well, a really fast digital computer. Yes, it can give the impression of being self-aware. IMO, that’s how AI works; that’s how CHAT works: it “feels like how a person would sound.” But is it an individual, which I believe you have to be to be self-aware? Oh, let me count the ways I doubt that. OK, I admit, having a biological mindset, I am prejudiced in favor of living systems over physically static ICs.

      I’ve learned that some of the latest circuit designers are incorporating, in some situations, organic and biological assemblies as templates in new circuit designs that physically repair (and by that nature, improve) said circuits. I’d love to look in the eyes of some of the guys I worked with so long ago and say, “Told you so.” But I’m not so petty. Am I? Even at my age, I don’t wish to be rude.

    • Wisdom as in “the fixed and stable disposition of the intellect to see truths in reality as a whole as they relate to its first principle, the First Mover”? I don’t think we can with any finite system. If it is just advice that we know we should follow, then a regular spreadsheet will do; it gives me advice about my spending that I definitely should follow, if I want to remain solvent. There is a world of difference between the two.
