
Is Artificial General Intelligence in Our Future?

Machines with human-like intelligence could pose a huge threat. But could such intelligence actually exist in the near or distant future?


By Luis Paiva

SVP of People, Technology, and Operations Luis Paiva helps manage and lead teams across BairesDev to implement the best industry practices possible.



The possibility of machines and technology possessing human-like intelligence has been a pervasive idea for centuries. Often, it’s seen as a terrifying thing — just take the countless science-fiction movies, such as Blade Runner, which depict this phenomenon as one that will bring about a dystopia. 

Once, this seemed like a very distant possibility. But as artificial intelligence (AI) gains sophistication, laypeople and experts alike wonder if this could actually happen. Elon Musk, for example, has pointed to AI as one of the greatest threats to humankind.

Artificial general intelligence (AGI) is a term used to describe incredibly advanced AI — technology with intelligence at a human level. So, could it actually happen? Here are the implications, along with the evidence for and against its ultimate existence.

What is AGI?

The AI we have now may seem very intelligent. And the innovations it has fueled are impressive. From voice assistants like Alexa and Siri to self-driving vehicles, the technology has already led to countless inventions that have changed the very way we work and lead our lives.

One of the greatest advancements in the field of AI is the neural network. These algorithms make machine learning possible: each time they process data, they adjust their internal parameters based on the errors they make, improving their responses over time.
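That "improves with repeated exposure" idea can be illustrated with a deliberately tiny sketch — a single-weight model trained by gradient descent. This is only a toy, not a real neural network (which would have many layers and millions of weights), but it shows the core loop: guess, measure the error, nudge the parameters, repeat.

```python
# Toy sketch of learning from repeated exposure: a one-weight model
# learns the relationship y = 2x by gradient descent.
# (Illustrative only; real neural networks are vastly larger.)

def train(samples, epochs=50, lr=0.05):
    w = 0.0  # start with no knowledge at all
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y    # how wrong the current guess is
            w -= lr * error * x  # nudge the weight to reduce the error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
learned = train(data)
print(round(learned, 2))  # converges toward 2.0
```

Each pass over the data shrinks the error a little; after enough repetitions the model has effectively "learned" the pattern — exactly the behavior the paragraph above describes, scaled down to a single number.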

But AI as it stands now is still considered weak AI. That doesn’t mean it’s unsophisticated — it simply means it has very specific applications and can perform certain tasks. There’s no risk of machines going rogue. But if AGI were possible, that could all change.

With AGI, a machine could not only behave like a human, learning and improving, but also think like a living, breathing being. It would actually possess intelligence, rather than merely being programmed to learn and interact. It would feel, perceive, observe, and more. What's more, it could learn and adapt much more quickly than a human being could.

If this sounds frightening to you, you’re not alone. Having machines that are self-aware and can think and act for themselves is something society has never actually experienced. This is the strongest AI we can imagine — leagues stronger than the narrow AI we have right now.

So, Is It Actually Possible?

Perspective #1: Yes, It’s Possible — in Fact, It’s Coming Sooner Rather than Later

Many experts, including computer scientists and professors, predict that AGI will arrive in the near future. Louis Rosenberg, Patrick Winston, Ray Kurzweil, and Jürgen Schmidhuber have offered arrival estimates ranging from less than a decade away to the mid-21st century.

There are several arguments these and other researchers and experts make to support the claim. For one, they point to the fact that human intelligence as a concept is “fixed,” while machine intelligence is improving and expanding. This, after all, is the very idea of machine learning: a segment of AI where technology learns and gains intelligence as it is repeatedly exposed to concepts and detects patterns.

As of yet, we haven't hit a clear ceiling on what machines can learn and do — each generation of systems has taken on tasks the previous one couldn't handle.

Perspective #2: No, It’s Not Possible

But other experts disagree. 

Georgia Institute of Technology’s Matthew O’Brien said, “We simply do not know how to make a general adaptable intelligence, and it’s unclear how much more progress is needed to get to that point.”

Meanwhile, Roman Yampolskiy of the University of Louisville argues that AI simply can't be both self-acting and under the control of humans. As it stands now, the technology is directed by humans, and there is no indication that we could let it operate completely independently, acting without a human first initiating its response.

Perspective #3: It’s Possible, But It Won’t Resemble Human Intelligence

Another argument doesn't dispute the possibility of AGI but sees it as a different phenomenon from human intelligence. According to this perspective, AGI will be neither superior nor inferior to human intelligence. Much as animal intelligence differs from our own, machine intelligence would simply have its own distinct set of abilities.

This perspective also maintains that AGI isn't something to be feared and could lead to greater and more productive innovation, with machines able to solve complex problems that humans can't. We've already seen early examples of this in fields like healthcare, where AI is used as a diagnostic tool, among other applications.

Perspective #4: It’s Possible, But It Won’t Be Realized Until the Distant Future

“It is a fraught time understanding the true promise and dangers of AI,” said Rodney Brooks, an MIT roboticist and co-founder of iRobot. “Most of what we read in the headlines…is, I believe, completely off the mark.” Brooks, for one, doesn’t foresee AGI being a realistic possibility until at least 2300.

Several other experts agree: while AGI could become a reality at some point, it is nowhere near doing so within our lifetimes.

What Does It All Mean for Technology — and the Future of Humankind?

It’s hard to argue that AI has improved the lives of many, if not most, people and businesses around the world. How many people rely on their voice assistants? How many lives have been — and will continue to be — saved by AI diagnostic tools? How much fraud has been detected thanks to FinTech-AI innovations?

But that doesn't mean AGI, if it's ever realized, couldn't do real damage to our world. Which perspective on the possibility of AGI's existence is correct — and whether it arrives in the near or distant future — remains to be seen. The threat it could pose is, as of now, unclear as well.

Does that mean we shouldn’t keep innovating as a society? Of course not. If one thing is obvious, it’s that technology benefits us far more than it harms us.



