How New Architectures Will Bring a New World

If someone were to tell you that you’re basically using the same computer people used back in 1945, you’d turn your back on them and leave. Why listen to such nonsense? Yet here I am, telling you precisely that. Before you close the tab and go somewhere else, let me tell you why I’m saying this.

I’m not oblivious to the fact that the computers of 1945 were massive machines with little processing power that looked nothing like 2020 smartwatches with far superior capabilities. However, at their core, those computers from 75 years ago are very close to your iPhone. How can that be? Because of one thing – architecture or, basically, how computers are built.

The main architecture used by everyone from smartphone makers to coffee machine manufacturers is the one described by John Von Neumann in 1945. It comprises a set of crucial components: a processing unit, a volatile memory that holds both data and instructions, and a list of instructions (the program) for the machine to follow.

Though that seems like a simplistic model for explaining modern computing, I assure you those components are the basis of some of the most advanced devices in existence. If you stop to think about it, they’re all there in the phone or laptop you’re reading this on, in the form of RAM, hard drives, CPUs, and apps (which are nothing more than packages of instructions written in programming languages).

The dominance of the Von Neumann architecture comes down to two reasons. The first is that it’s fairly easy to implement in hardware, using components we already know how to build: transistors, memory chips, drives, and whatnot. The second is that the entire process that powers the Von Neumann architecture can be described mathematically. So, just by learning the math, you can understand how a given input generates the exact same output every time.
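
To make that concrete, here’s a minimal sketch of a Von Neumann-style machine in plain Python, a toy for illustration rather than a real CPU. One memory holds both the program and the data, and a tiny processing unit fetches and executes instructions from it, one at a time. The instruction names and the little program are invented for the example.

# A toy Von Neumann machine: one memory for program and data, one processing unit.
memory = {
    "program": [             # the list of instructions to follow
        ("LOAD", "a"),       # copy the value at address "a" into the accumulator
        ("ADD", "b"),        # add the value at address "b" to the accumulator
        ("STORE", "result"), # write the accumulator back into memory
        ("HALT", None),      # stop
    ],
    "data": {"a": 2, "b": 3, "result": None},   # the (volatile) data store
}

def run(memory):
    accumulator = 0          # the processing unit's single working register
    pc = 0                   # program counter: which instruction comes next
    while True:
        op, arg = memory["program"][pc]   # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            accumulator = memory["data"][arg]
        elif op == "ADD":
            accumulator += memory["data"][arg]
        elif op == "STORE":
            memory["data"][arg] = accumulator
        elif op == "HALT":
            return memory["data"]

print(run(memory))   # prints {'a': 2, 'b': 3, 'result': 5}

Feed it the same program and data and you’ll get the same result every single time, which is exactly the mathematical predictability described above.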

 

Beyond Von Neumann

Now, if the Von Neumann architecture is so popular, so many people already know how to work with it, and it’s fairly easy to understand – why would anyone try to retire it? That’s a fair question. Mainly because the modern world is pushing us towards more computational power and more accessibility, and those are precisely the areas where this architecture shows its major drawbacks.

First, there’s a problem called the Von Neumann bottleneck: because instructions and data travel over the same path between the processor and the memory, the processor spends a lot of its time waiting for data to arrive. That limits performance in machines that need to process large amounts of data quickly (you know, like a lot of modern computers do).
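
To see why that hurts, here’s a rough sketch in the spirit of the toy machine above that simply counts how many trips over the shared processor-memory path it takes to add up a thousand numbers. The counts are purely illustrative (real hardware softens the problem with caches and other tricks), but the imbalance between useful work and data movement is the point.

# A rough illustration (not a benchmark) of the shared-path problem.
trips = 0                      # every value moved between memory and the processor

def fetch(store, address):
    global trips
    trips += 1                 # one trip over the shared path per access
    return store[address]

data = {i: i for i in range(1000)}   # a thousand numbers sitting in memory

total = 0
for i in range(1000):
    trips += 1                 # fetching the next instruction also uses the path
    total += fetch(data, i)    # fetching the operand uses it again

print(total, trips)            # 499500 computed, but roughly 2000 memory trips to get there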

And then there’s the need to write instructions for the Von Neumann architecture, which typically involves high-level programming languages that are, at their core, mostly mathematical. It’s not that learning to code in a specific language is impossible, but expressing certain processes in that language can be hard, especially the more complex ones. That’s not all. Some people are simply put off by this kind of logical formalism, which drives away talented and innovative thinkers who could help tackle those very same processes.

Up until now, no one was really bothered by those drawbacks. After all, we were able to go from gigantic machines to the tiny supercomputers you wear on your wrist. Why would we need more? Mainly because more powerful computers and machines could aid us in ways we never thought possible. By bypassing the limitations of the Von Neumann architecture, we would be boosting our ability to understand and solve many more problems.

In disbelief about that last claim? You shouldn’t be, because an emerging architecture is already proving the point. I’m talking about Deep Learning.

 

Computers That Think Like Humans

Let’s recap what we’ve seen so far, shall we? There’s a mainstream architecture that’s based on three components: a processor, a memory, and a list of instructions. Though it had some limitations from the get-go, we have been using it for three-quarters of a century to power our modern computing revolution, with a lot of success. Yet, to take the next step, we need to bypass those limitations.

That’s precisely what Deep Learning is promising.

It bases its workings on neural networks, which are complex systems that loosely mimic how the human brain works. In other words, these systems take massive data sets as input, process them through a series of connections, and provide valuable and highly accurate outputs. The beauty of it all is that you don’t actually have to instruct a deep learning machine on how to process the data – the system makes its own connections and offers insights.

Hard to grasp? Think about it like this. When you’re a little child, adults teach you about the world by repeating words, pointing at things, and showing you pictures – basically, they teach you by example. Once you’ve learned, you can go about your life knowing basic things, like what a TV is, as well as abstract concepts, like math or freedom.

 

A Deep Learning system learns in the same way. You use a training data set to teach it something (say, what melodies, faces, or market fluctuations look like), labeling what each example represents. After the model is refined, the Deep Learning system is capable of abstracting what it has learned and applying it to new data with high accuracy.
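
Here’s what that learning-by-example loop can look like in a minimal sketch, written in plain NumPy so nothing is hidden behind a framework. It trains a tiny neural network on four labeled examples of the XOR pattern; the network size, learning rate, and number of training steps are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)

# The "training data set": four examples and the labels we want learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights, i.e. the "connections" between artificial neurons.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(10000):
    # Forward pass: push the examples through the connections.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every connection to shrink the error.
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hidden
    b1 -= learning_rate * grad_hidden.sum(axis=0, keepdims=True)

# After training, the network has abstracted the pattern from the examples.
predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(predictions, 2))   # should land close to [[0], [1], [1], [0]]

Nobody wrote a rule that says what XOR is; the network found it by adjusting its connections until its outputs matched the examples. That’s the whole trick, just scaled up enormously for faces, melodies, or markets.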

In some sense, they share the same basic process as machines built on the Von Neumann architecture: you give them input and they give you a relevant output. However, Deep Learning machines are different in that they don’t offer insights based on predefined rules. In fact, after studying the examples, these machines can make connections and pick up on patterns in those examples at a scale and depth no human could match.

Where Von Neumann machines use explicit math and logic, Deep Learning uses connections that often can’t be neatly explained in mathematical terms. Any modern use of deep learning (from facial recognition to business algorithms) can offer amazing results, but you’d be hard-pressed to find a step-by-step explanation as to why it suggests a specific business strategy or matches a person to a face in its database.

As such, the architecture brought by Deep Learning means we’re leaving Von Neumann aside in favor of more sophisticated ways of computing. By “creating artificial human brains,” we’re opening up the possibility of better understanding complex issues in fields like healthcare, climate change, social policy, marketing, agriculture, and many more. The shift is both inexorable and desirable.

 

The Quantum Dream

Deep Learning isn’t the only architecture making a splash. It’s true that it’s the most developed, but there’s another alternative that has long been the dream of computer experts: quantum computers.

It all started to look more real last year, when Google announced they had achieved quantum supremacy with their quantum computer. In other words, the tech giant claimed to have a machine able to solve, in minutes, a problem that would take even the most powerful classical supercomputer thousands of years. Disputes about the claim aside, quantum computing seems to be within reach, so many are taking note.

Why? Because bringing quantum physics into computers could change not just the computing field but our entire perception of reality. Quantum computers get rid of the classic binary system, where data is represented by bits that are each either 1 or 0. Instead, they rely on a concept known as quantum superposition, in which a quantum bit (qubit) can exist in a combination of states at the same time. In other words, until it’s measured, the data can be 1 and 0 at once.
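
Here’s a minimal sketch of that idea, a toy simulation with NumPy rather than real quantum hardware. A qubit’s state is just a pair of amplitudes over the values 0 and 1; a Hadamard gate turns a qubit that starts as a definite 0 into an equal superposition, and measuring it collapses it to a plain 0 or 1 with equal probability.

import numpy as np

ket0 = np.array([1.0, 0.0])                       # the state |0>: definitely 0
hadamard = np.array([[1.0,  1.0],
                     [1.0, -1.0]]) / np.sqrt(2)   # the gate that creates superposition

state = hadamard @ ket0                # now (|0> + |1>) / sqrt(2): 0 and 1 at once
probabilities = np.abs(state) ** 2     # Born rule: probability = |amplitude|^2
print(probabilities)                   # [0.5 0.5]

rng = np.random.default_rng()
print(rng.choice([0, 1], size=10, p=probabilities))   # each measurement yields a single bit

String many qubits together and the machine can hold an enormous number of such combinations at once, which is where the promised speedups come from.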

Without getting too much into it, quantum computers are an entirely different kind of beast. It would be practically impossible for any human, or for any Von Neumann-based computer, to solve some of the problems these machines could tackle. Think of things like simulating the behavior of atoms and particles under unusual conditions, strengthening cybersecurity with more advanced cryptographic techniques, or developing potent new drugs to cure illnesses.

The prospect of quantum computers becoming standard is still somewhat far off. But with more and more companies developing them, it wouldn’t be totally insane to think we’ll see one in the coming decade or so. With it, the leap in our processing capabilities would be unheard of and would fundamentally change our lives.

That’s because, rather than stacking up more and more Von Neumann-based machines to combine their power, we would get a major push towards understanding things that have baffled us for centuries.

 

New Components for a New World

After 75 years of rule by the Von Neumann machines, a lot of what you’ve read here might seem impossible or straight out of a sci-fi movie. Well, you’d better believe it. Though we aren’t seeing that much of a change right now, the effects of the new architectures will start to pop up everywhere. In fact, Deep Learning is already behind some of today’s most revolutionary products, like smart speakers and self-driving cars.

Even if those don’t feel particularly revolutionary (which, come on, they totally are), the coming years will show how much the architectural disruption we’re seeing in tech today will impact our daily lives. Buckle up and be prepared, because the new components and approaches for the new world are already here, and they promise that we’ll be leaving behind the same old computers, once and for all.
