
At The Fringe of The Norm: The Limits of AI

AI is a powerful technology that's growing by the minute, but that doesn't mean it's infallible.


By Nate Dow

Solutions Architect Nate Dow helps BairesDev teams deliver the highest quality software and products through creative business solutions.

7 min read


I'll let you in on a little secret: as powerful as it is and as trendy as it has become, AI is far from perfect. At its core, AI is driven by statistics, and statistics has always faced extremely difficult challenges when trying to model reality. Some of those challenges can be tackled with inventiveness and machines, while others remain as relevant as they have always been.

Understanding some of these issues (and therefore some of the limitations of AI systems) is important. Look, I'm as excited as the next person about the future of Artificial Intelligence. But knowing where we stand helps us make better decisions and build better implementations.

Understanding the Chinese Room

John Searle, a well-known academic in the philosophy of Artificial Intelligence, created a thought experiment called the Chinese Room. It started as a counterargument to the validity of the Turing Test, but it later grew into one of the strongest arguments against the idea that machines can think.

The experiment goes something like this: imagine a person locked inside a room filled to the ceiling with books. There is just one problem: they are all in Mandarin, and our person doesn't understand the language at all.

The room has two slots, and every few hours both open. Through one comes a 3×5 card with a few characters written in Mandarin; through the other, a black ink pen. Our subject, growing bored with their accommodations, has perused some of the books and recognizes some of the characters written on the card.

They don't understand what the characters mean, but in all the books those characters appear next to another set of characters, every single time. So, on intuition, they decide to write the corresponding characters on the card and place it in the second slot.

Much to their surprise, the second slot opens again and reveals a delicious meal. Our subject quickly realizes that they are being rewarded every time they pair the symbols correctly.

Searle then ponders the following question: if the writer of the cards doesn't know what's happening inside the room, but sees that someone is replying coherently to whatever they write, will they assume that the person inside the room speaks Mandarin?

For Searle, computers are akin to the person inside the room. They can be trained to see patterns and to make decisions based on those patterns, but they don't really understand the patterns, just like our subject doesn't need to know what the symbols mean to reply to the messages.

Following Searle’s argument, a computer can clearly see that two variables are correlated, but it would be impossible for the computer to build a theory that explains why that correlation exists. That is an inherent limitation of machine learning.
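To make that concrete, here is a minimal sketch (using NumPy; the ice cream and sunburn variables are invented for illustration) of what "seeing" a correlation amounts to. The computation happily reports a strong relationship between two series driven by a hidden common cause, but nothing in it explains why that relationship exists.

```python
import numpy as np

rng = np.random.default_rng(42)
# A hidden common cause, e.g. a hot summer, drives both observed variables.
confounder = rng.normal(size=500)
ice_cream_sales = 2.0 * confounder + rng.normal(scale=0.5, size=500)
sunburn_cases = 1.5 * confounder + rng.normal(scale=0.5, size=500)

# The machine "sees" the pattern clearly...
r = np.corrcoef(ice_cream_sales, sunburn_cases)[0, 1]
print(f"Pearson r = {r:.2f}")  # strong correlation, zero built-in explanation
```

A model trained on these two variables would happily predict one from the other, yet nothing in the fitted numbers encodes the summer weather that actually connects them.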

Staying inside the box

Algorithms are at their best when the predictions they make are based on data similar to the data they were trained on. The bigger the difference between the input data and the training set, the more likely the predictions are to fail.

Mathematical models are built on relationships between variables: height and weight are correlated, for example, and so are GDP and consumer spending. These relationships, however, don't necessarily remain constant.

In the aforementioned height/weight case, for instance, the relationship tends to disappear as weight increases. That's because people can keep gaining weight without growing taller.

If I build a model that predicts someone's height based on their weight, I might have a reliable model as long as the person falls within “normal” parameters (keep that word in mind). The prediction error will increase at extreme weights, such as for people who are extremely thin or obese.
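As a rough sketch of that height-from-weight model (synthetic data and made-up coefficients, so the numbers are illustrative only), a simple linear fit behaves well inside the range it was trained on and confidently extrapolates nonsense outside it:

```python
import numpy as np

rng = np.random.default_rng(0)
# Training data covers only a "normal" weight range, in kg.
weight = rng.uniform(50, 100, size=1000)
# In this range, height (cm) is roughly linear in weight, plus noise.
height = 100 + 0.9 * weight + rng.normal(0, 5, size=1000)

# Ordinary least-squares fit of a straight line.
slope, intercept = np.polyfit(weight, height, deg=1)

def predict_height(w):
    return intercept + slope * w

print(predict_height(75))   # inside the training range: plausible, ~168 cm
print(predict_height(160))  # far outside it: the line keeps climbing to ~244 cm,
                            # but real heights plateau, so the error explodes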

Statistical “normality” is when a data set follows a pattern similar to a normal distribution. In broad terms, it means that roughly 68% of the data falls within one standard deviation of the mean. Algorithms are at their best when the variables follow normal, predictable distributions. As such, data that falls outside the norm tends to yield unpredictable results.
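A quick sanity check of that empirical rule, assuming nothing beyond NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Fraction of samples within one standard deviation of the mean.
within_one_sigma = np.mean(np.abs(x - x.mean()) <= x.std())
print(f"{within_one_sigma:.1%}")  # ≈ 68.3%
```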

Individual vs. groups

Human behavior is extremely hard to predict on an individual basis. As we stated before, algorithms rely on training data to make predictions. While they are really good at predicting trends, it's another matter entirely when we try to extrapolate to the individual.

This isn't a problem with just machine learning, but with statistics in general. For example, we could say that, on average, men are physically stronger than women. That is correct as long as we think in terms of trends and global averages, taking into account every human being alive (and define strength as something concrete, like weightlifting).

But on a personal level, things get muddier. A female MMA fighter or trainer will outright lift more weight than a man who has never done a single pushup in his life. So, does that mean that AI isn’t useful?

Of course not. When people make strategic decisions based on AI, they cast a wide net. It's not that the Amazon algorithm is targeting you or me specifically; it's that it noticed that people who follow a certain pattern tend to prefer a certain kind of product.

There is no guarantee that subject A or B specifically will buy whatever the targeted ad showed them, but they are more likely to buy it than a person who doesn't share their interests. It's a gamble, one where the company stacks the odds in its favor.
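Here is a hedged sketch of that gamble (the conversion rates below are invented, not Amazon's real numbers): targeting raises the group-level hit rate considerably while still telling you almost nothing certain about any single person.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
# Assumed per-person purchase probabilities, for illustration only.
p_targeted, p_random = 0.04, 0.01

# Simulate whether each individual in each audience buys the product.
targeted_buys = rng.random(n) < p_targeted
random_buys = rng.random(n) < p_random

print(f"targeted conversion: {targeted_buys.mean():.2%}")  # ~4%: a better bet
print(f"random conversion:   {random_buys.mean():.2%}")    # ~1%
# Yet ~96% of targeted individuals still bought nothing: the win is statistical.
```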

Susceptibility to outliers

Another problem is outliers: data that is extreme and doesn't represent the norm. On one hand, models trained on normal data tend to underperform when they try to make predictions based on outliers.

On the other hand, there is the issue of training an AI on outliers. If the data is a byproduct of a momentary phenomenon, the algorithm will become less reliable once things return to normal.
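Here is a small illustration of that second failure mode with synthetic numbers: a handful of extreme points, standing in for a brief shock like the one discussed below, is enough to drag an ordinary least-squares fit away from the regime the data will return to.

```python
import numpy as np

rng = np.random.default_rng(3)
# The "normal" regime: y grows at roughly 3 units per unit of x.
x = rng.uniform(0, 10, size=200)
y = 3 * x + rng.normal(0, 1, size=200)

# Add a few wildly atypical observations from a momentary shock.
x_out = np.append(x, [9.0, 9.5, 10.0])
y_out = np.append(y, [200.0, 220.0, 240.0])

clean_slope = np.polyfit(x, y, 1)[0]
skewed_slope = np.polyfit(x_out, y_out, 1)[0]
print(f"slope without outliers: {clean_slope:.2f}")   # ≈ 3
print(f"slope with outliers:    {skewed_slope:.2f}")  # pulled noticeably above 3
```

Three points out of two hundred are enough to bias every future prediction the model makes once conditions revert to normal.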

The COVID pandemic is a perfect example. Supply chain models were simply unable to predict what was going to happen when the world shut down for a few months. We didn't have the data to train models adapted to those circumstances.

So, the solution would be to gather new data and retrain the model with information from 2020 and 2021, right? Well, not quite. As borders reopen and supply chains start moving again, models built with that data will start to underperform.

Once again, that's not to say there isn't merit in recording and using this data. It's just that we have to understand that AI isn't some magical intelligence that can discern the specifics of a situation or intuitively make an assessment.

Looking towards the future

If it feels like I'm criticizing AI, nothing could be further from the truth. The field is growing tremendously, and I have no doubt in my mind that we are approaching a golden age of AI research.

Having said that, we have to understand that AI isn't perfect and, so far, is not a replacement for human ingenuity. We are at our best when machines and humans work together: one processing massive amounts of data, the other giving shape to the results.

If you enjoyed this, be sure to check out our other AI articles.
