That's what I hate about the current AI hype, tech-bros, Musk and AI wankers - they keep suggesting we're on the verge of AGI, when we're not.
Current AI is still a hammer - no matter how many times you swing it, no matter how many nails it hits, it won't become sentient.
Current AI is just a math hammer with Tamagotchi features (and a text-to-code interface in the form of understanding human language in its raw form).
You can add a microphone to it and the hammer can say "me hungry, me sad" and hold conversations, but this is still basically the same as an AI in a video game reacting to you based on pre-set conditions.
However, the way Large Language Models do context recognition between words might actually be a nice approximation or imitation of how the human mind does it, so it is a small step towards understanding what General Intelligence even is.
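To make that "context recognition between words" point concrete, here is a toy sketch of the similarity-weighted mixing idea behind attention in Large Language Models. This is NOT how any real model works - the words, vectors, and numbers are made-up illustrative assumptions - it only shows the mechanism: a word's meaning gets weighted by how similar its vector is to the vectors of surrounding words.

```python
import math

# Toy sketch, not a real LLM: each word gets a tiny hand-made vector,
# and "context recognition" is just similarity-weighted mixing --
# a crude cartoon of the attention mechanism. All values are invented.

def dot(a, b):
    # Similarity between two word vectors.
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    # Turn raw similarity scores into weights that sum to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys):
    # How strongly the query word "attends" to each context word.
    return softmax([dot(query, k) for k in keys])

# Invented example: an ambiguous word ("bank") whose vector is set up
# to be closer to "river" than to "money".
vectors = {
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
    "bank":  [0.9, 0.1],
}
weights = attend(vectors["bank"], [vectors["river"], vectors["money"]])
```

With these made-up vectors, "bank" ends up weighting "river" more heavily than "money" - which is the flavor of the trick, even though real models learn such vectors from data rather than having them hand-set.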
We're still far away from Skynet or Cylons.
In the end, nature itself will be the limiter - we think the human mind is flawed, but it's actually super optimized by millions of years of evolution to be flexible and universal... on a very tight energy budget (still, the human brain takes roughly 20% of the body's energy, which is huge compared to its share of body mass).
Super-intelligence means super energy requirements - you'll still need some trade-offs, like specialization and narrowing the scope to get around it, but then you sacrifice general flexibility... so if you need that flexibility, you have to sacrifice super-intelligence and fall back to the basic template: human-level intelligence and flexibility.
Eventually we might end up with Skynet... chained to a big-a** fusion reactor... rigged with hundreds of failsafes, breakers, explosives, EMP bombs, a couple of nukes, and a dude with glasses and a crowbar.





