Billionaire and X (formerly Twitter) owner Elon Musk has predicted that AI will probably be smarter than any individual human by next year. Musk was responding to a clip of a discussion between podcaster Joe Rogan and futurist Ray Kurzweil on when AI will reach human-level intelligence.
In the discussion, Kurzweil told Rogan that human-level artificial intelligence would become a reality by 2029. He said, “We’re not quite there, but we will be there, and by 2029 it will match any person. I’m actually considered conservative. People think that will happen next year or the year after.”
Reacting to the clip on X, Musk wrote, “AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined.”
What is AGI?
AGI, or Artificial General Intelligence, has become a buzzword among tech leaders worldwide following the rise of artificial intelligence systems such as ChatGPT and Gemini. There is still no agreed definition of the term, but it is broadly understood as the stage at which an AI model can perform any task a human can, with equal or better proficiency.
Even among technology leaders, there is wide disagreement about when, or even whether, AGI will become a reality, and whether it will harm or benefit humanity. Let’s take a look at what different tech leaders think about AGI.
Meta’s Yann LeCun:
In an interview with Time Magazine earlier this year, Meta’s Chief AI Scientist Yann LeCun said that the current LLMs powering AI chatbots are not on a path toward AGI. He said, “It’s astonishing how [LLMs] work if you train them at scale, but it’s very limited. We see today that those systems hallucinate, they don’t really understand the real world. They require enormous amounts of data to reach a level of intelligence that is not that great in the end. And they can’t really reason. They can’t plan anything other than things they’ve been trained on. So they’re not a road towards what people call ‘AGI.’ I hate the term. They’re useful, there’s no question. But they are not a path towards human-level intelligence.”
Sundar Pichai on AGI:
In an interview with the New York Times, Google CEO Sundar Pichai rejected the debate around AGI, saying that current systems are going to be ‘very, very capable’ in the future. He said, “When is it A.G.I.? What is it? How do you define it? When do we get there? All those are good questions. But to me, it almost doesn’t matter because it is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you reached A.G.I. or not; you’re going to have systems that are capable of delivering benefits at a scale we’ve never seen before, and potentially causing real harm. Can we have an A.I. system which can cause disinformation at scale? Yes. Is it A.G.I.? It really doesn’t matter.”
Sam Altman on AGI:
OpenAI co-founder and CEO Sam Altman has been among the most prominent voices on the potential benefits that AGI could bring to humanity. Speaking to Time Magazine last year, he said, “I think AGI will be the most powerful technology humanity has yet invented. If you think about the cost of intelligence and the quality of intelligence, the cost falling, the quality increasing by a lot, and what people can do with that. It’s a very different world. It’s the world that sci-fi has promised us for a long time—and for the first time, I think we could start to see what that’s gonna look like.”
This article originally appeared on Livemint.