
Yes, You Can Trust AI. Here Is Why

No, AI isn’t here to kill us or to steal our lunch money; AI is a powerful tool as long as we use it responsibly.


By Dustin Dolatowski

VP of Client Engagement Dustin Dolatowski leads high-performing teams to engage new client relationships and create customer success initiatives.


AI is the talk of the town, isn’t it? My social media is filled with people talking about all the things that we can and can’t do with AI, and most of them aren’t even part of the tech industry. While I’m a big proponent of AI and one of the people who believe that this is going to have a massive impact on both the economy and society, I kind of expected the buzz would have died down by now.

There is no doubt that AI has become part of the public consciousness or that AI development services are increasingly in demand across industries. If you’ve been following the news, you know that the Writers Guild of America went on strike against the Hollywood studios. And what’s one of their demands? Protection against the possible adoption of AI and the consequences it would have for Hollywood writers. Even actors are worried that AI could soon reproduce their likenesses to such an extent that their services would no longer be required.

For every AI proponent there is a detractor, and to be quite frank, I’m more than a little frustrated with all the “this or that AI genius is speaking out against AI” headlines. Now, I don’t think this is just groundless fear-mongering. After all, Oppenheimer understood very well the dangers of what he and his team were doing when they developed the atomic bomb. So, too, many smart people who have spent their lives working with AI have expressed deep concerns about the future if we don’t start taking this technology seriously.

Oppenheimer and the bright physicists of his time tried to understand how the world works at its most fundamental level (atoms); their theories and experiments served as the basis for the atomic bomb. These scientists believed in progress and in science, and while some of them understood the consequences of their actions, the powers that be weren’t as aware of the ethical implications.

And that’s the point many of these AI thinkers are trying to make: it’s not that we shouldn’t trust AI. It’s that we should reflect on its impact instead of mindlessly brute-forcing our way to more powerful models without understanding their inner workings and the social impact they might have down the line.

My frustration comes from the fact that the media latches onto the distrust toward AI and shines the worst possible light on the subject. I’m sorry, but AI has been around a lot longer than your interest in the matter. And while we’ve always thought that this technology could be dangerous (because, let’s be real, any technology in the wrong hands can be), there are a lot of misconceptions and fake news making the rounds that are muddying the waters.

So that’s what we are going to talk about today—why AI isn’t Skynet and why it can be trusted.

No, Artificial Intelligence Will Not Take Over Our Work 

While at first glance it might seem silly that famous writers and artists are worried about their work in the age of AI, it’s not as if their concerns are unfounded. Geeks everywhere already witnessed the rather shameful attempt by G/O Media to “get with the times” and have an AI model write articles for its site Gizmodo.

Just one small issue: the article was supposed to list every Star Wars product in chronological order. Easy enough for a human writer, but for a large language model? It put things out of order, and it forgot some of the most famous entries in the series. And since GPT’s knowledge cutoff is 2021, it couldn’t even mention entries from the last couple of years, which is kind of lackluster for a blog that’s all about tech and sci-fi. Now, a connoisseur might argue: “But what if we built an agent capable of searching the web before writing the article?”

Yeah, that could very well work in theory, but not if Glorbo has anything to say on the matter. Long story short, geeks being geeks suspected that a publication was generating AI content based on a subreddit for a very famous game. So geeks did what they do best and got together to create fake posts about the almighty Glorbo, and lo and behold, an article was published on the subject in record time.

Once again, a human being could have spotted the hoax, but a generative model doesn’t yet have the chops to understand sarcasm or to tell which words make no sense in the context of a given aesthetic or writing style.

And we should be alarmed, right? I mean, thousands in IT are losing their jobs thanks to AI. Not so fast; let’s put things into perspective. These rounds of layoffs predate the rise in popularity of AI by quite a margin. We’ve been hearing about hiring freezes and downsizing from big tech for a while, and the reason is simple: we are living in a post-pandemic world. Not only did the world economy take a huge hit, but many of the businesses that scaled up to meet demand are now overstaffed.

In other words, yes, AI may have been a factor in the decision-making process, but let’s not kid ourselves. This isn’t happening in a vacuum. Companies are looking for ways to cut costs because we can all see the writing on the wall: 2023 is a rough year, and most businesses are just trying to survive.

For example, while IBM initially said it would stop hiring for jobs it thought could be handled by AI, it later added that, in the long term, AI would likely create more jobs than it destroys. Why? Because AI empowers people, as long as they use it for what it’s designed for.

One look at what happened with Gizmodo should tell you everything you need to know. On one hand, we have someone high up the chain who believed the buzz surrounding AI and how good it is, so they thought, “We could save some money by having some AI writers.” Except that’s the wrong way of thinking about it.

Let’s flip the story for a second. Imagine we have a Star Wars fan who knows more about the worlds and stories built by George Lucas than anyone else, someone who is a savant of the galaxy far, far away. Wouldn’t you like that person to write your article about the subject? Of course you would.

But let’s say that for whatever reason they aren’t a good writer. They don’t know how to get their ideas across, or maybe they are way too technical, or maybe they don’t like writing at all. The editor would have to spend a frustrating amount of time polishing the article before it sees the light of day.

So, we have an AI with the writing skills and the world’s biggest fan of the franchise. Mix them together and you could have one of the best writers for all things Star Wars, and with a good editor behind the duo, you have a winning team. On their own, this fan would have had trouble making a career as a writer or content creator, but that changes thanks to AI.

Now, maybe this new writer is neurodivergent, or English isn’t their first language, or they have ADHD. Whatever the case may be, with the right implementation, AI is going to broaden their scope and open up so many new possibilities for people who’ve been marginalized. Do you see where I’m going with this? AI is not our enemy or our rival; it’s our colleague, and people stand to gain a lot by incorporating it into their workflows.

Yes, certain jobs will disappear, but at the same time so many new avenues will open up, as long as we understand the market trends and make an effort to adapt.

No, AI Systems Will Not Destroy The World

Just the other day I heard an expert on the radio talk about sci-fi authors and how they always manage to hit the mark with their predictions. At a surface level, the argument makes perfect sense. For example, while not as elaborate, the first flip phones bore a striking resemblance to Star Trek’s communicator, and nowadays we have smartphones capable of running medical diagnostics, much like the tricorder.

Unfortunately, this is a bad mixture of availability bias and confirmation bias. We simply take whatever is most popular, or whatever reinforces our prior beliefs, as evidence for our ideas. Modern, popular sci-fi often takes a nihilistic view of the future (in stark contrast to the optimistic futurism of 1950s sci-fi).

For every evil AI, like Agent Smith from The Matrix or Skynet from The Terminator, there is a good one, like Data from Star Trek: The Next Generation or the benign Minds from Iain M. Banks’ Culture series. There are countless examples of AI and machines acting in concert with humans to usher in a better life.

As I’ve been giving talks about AI, people have often quoted a study in which over 40% of AI experts believe there is a 10% chance of AI becoming a catastrophic technology for humanity, and they’ve asked me what my thoughts are on the subject. First, I would really like to know how that number was arrived at, because if it’s an estimate, human beings are really bad at guessing odds without training.

For example, did you know that there are over a thousand different reasons why a plane could have an accident that kills everyone on board? Based on that information alone, take a guess at the odds of a plane crashing. They’re about 1 in 1.2 million, and even if you are one of the unlucky few, the chance of that accident being fatal is about 1 in 11 million. By comparison, the odds of being in a car accident are about 1 in 5,000.

Of course, you probably already knew this, because all it takes is watching the news every day to notice just how rarely a plane crash gets reported, while the odds are that you’ve been stuck in traffic because of a car crash. The thing is, we simply do not have enough information about AI to make educated guesses about what could trigger an apocalyptic scenario.

Here is another piece of trivia that might let you rest easy at night. Many of these experts are actually thinking about Artificial General Intelligence (AGI) when they talk about the risks of AI. While I have no doubt in my heart of hearts that someday we will see AGIs, that’s not happening anytime soon.

For example, one of the most powerful generative models on the market, GPT-4, has been one headache after another for OpenAI. Not only is the company still limiting paying users to 50 messages every 3 hours (as of July 2023), but a recent study suggested that the model has also become less capable, failing at simple arithmetic problems and writing code with more bugs than before.

One running theory is that GPT-4 was a brute-forced model. In other words, OpenAI just expanded the size of the model and threw raw power at the issue, and the end result was an extremely powerful product that requires way too much juice to work. If that is the case—and keep in mind that this is a rather tame model that only does natural language processing—can you imagine just how much power we are going to need to make AI that is capable of doing everything a human can?

Don’t get me wrong, there are ways around this issue, but not in the short term.

To summarize, current AI is highly specialized. We have AI that’s great at playing chess, diagnosing diseases, translating languages, and many other specific tasks. However, we are nowhere near creating an AI that can do all of these things and more, which is what is typically envisioned when talking about a catastrophic AI scenario.

However, the fact that AI is not going to become catastrophic overnight doesn’t mean we shouldn’t be careful. As AI develops, there are real concerns that need to be addressed, including issues related to privacy, security, and the concentration of power. It’s also essential to establish ethical guidelines for the development and use of AI and to consider how to handle potentially harmful situations, like an AI system that learns to achieve its goals in harmful or disruptive ways.

If we handle these challenges properly, trustworthy AI has the potential to be an incredibly beneficial technology. It could help us solve some of the world’s most complex problems, improve our quality of everyday life, and open up new opportunities for discovery and innovation.

However, managing the risks associated with AI and ensuring it benefits all of humanity requires international cooperation, thoughtful regulation, and ongoing research. We can’t afford to be complacent or reckless, but with the right approach, we can create a future where AI is a positive force rather than a threat.

Yes, AI Lies and Hallucinates, But Who Doesn’t?

Yes, it’s true: as we’ve touched on previously, AI can “lie” and “hallucinate.” These are anthropomorphic terms used to describe some of the errors AI can make. “Lying” refers to how AI can generate inaccurate or misleading information. “Hallucinating” refers to how AI can misinterpret input or fill in missing information based on its training, often leading to bizarre or incorrect outputs.

Consider a generative model like GPT-4, which is trained to predict and generate human-like text based on what it has been fed. The model doesn’t understand the content it’s generating or have any knowledge of the world outside its training data. Therefore, it can occasionally generate text that sounds plausible but is completely false or nonsensical.
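To see what that looks like in practice, here is a minimal sketch using the Hugging Face transformers library, with the small, openly available GPT-2 model standing in for a larger one (the prompt is purely illustrative): the generation loop only continues the prompt with statistically likely tokens, and nothing in it checks whether the continuation is true.

```python
from transformers import pipeline

# Minimal sketch: GPT-2 stands in for a larger generative model.
# It continues the prompt with whatever tokens are statistically likely,
# with no step that verifies facts or consults a source of truth.
generator = pipeline("text-generation", model="gpt2")

prompt = "The very first Star Wars product ever released was"
output = generator(prompt, max_new_tokens=30, do_sample=True)

# Fluent and plausible-sounding, but not fact-checked.
print(output[0]["generated_text"])
```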

Similarly, an AI trained to recognize images can sometimes “see” objects or patterns that aren’t actually there, a phenomenon sometimes referred to as “hallucinating.” This happens because the AI has learned to associate certain patterns in the data with certain labels, and it sometimes applies these associations inappropriately.
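A quick way to see this is to hand a pretrained classifier something that isn’t an object at all. Here is a sketch assuming PyTorch and a recent torchvision with a pretrained ResNet-18: the model maps even pure random noise onto one of the 1,000 labels it learned, because assigning a label is all it knows how to do.

```python
import torch
from torchvision import models

# Minimal sketch (assumes a recent torchvision): a pretrained classifier will
# assign one of its learned labels to ANY input tensor of the right shape.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

noise = torch.rand(1, 3, 224, 224)  # random pixels, not a photograph of anything
with torch.no_grad():
    probs = torch.softmax(model(noise), dim=1)

confidence, label_idx = probs.max(dim=1)
print(f"label index {label_idx.item()} with probability {confidence.item():.1%}")
```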

Another example is a predictive model asked to make predictions on data that is too different from what it was trained on. An AI can be trained on millions of data points about the calorie counts of hamburgers, but don’t expect it to accurately predict how many calories are in an apple pie.

AI models hallucinate because they lack the understanding of the world that humans have. Humans have a common sense understanding of the world, born out of our daily life experiences. AI, on the other hand, lacks this intuition. It merely learns patterns from the data it was trained on and applies these patterns to new data.

You know how humans like to say that we shouldn’t mix apples and oranges? To an AI, that doesn’t mean anything. If it is handed a vector of numbers, it will spit out a result every time, regardless of where those numbers came from.
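To make that concrete, here is a minimal sketch with scikit-learn and made-up numbers (the features and figures are purely illustrative): a regression model fitted only on hamburger data still returns a calorie estimate for any vector it is handed, sensible or not.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: (weight in grams, fat in grams) -> calories, hamburgers only.
X_train = np.array([[150, 12], [200, 18], [250, 25], [300, 33]])
y_train = np.array([380, 510, 640, 780])

model = LinearRegression().fit(X_train, y_train)

# A slice of apple pie described with the same two numbers: the model has no notion
# that this is a different kind of food, so it simply extrapolates hamburger patterns.
apple_pie = np.array([[125, 19]])
print(model.predict(apple_pie))  # a number always comes out, sensible or not
```

Nothing in the model flags that the input looks nothing like its training data; spotting that mismatch is a separate problem, often called out-of-distribution detection.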

These issues are well-known among AI researchers and are an active area of study. Researchers are designing algorithms to make AI more reliable and less prone to such errors. For instance, they’re exploring ways to ensure AI’s outputs can be explained and justified. They’re also working on methods for detecting and correcting when AI “hallucinates.”

Let’s also not forget that even humans lie and hallucinate. We’re all prone to cognitive biases, errors in judgment, and false beliefs. However, unlike humans, AI doesn’t have malicious intent when it “lies” or “hallucinates.” It’s simply doing what it’s been trained to do, based on the patterns it’s learned.

Building trust in autonomous systems is essential for their successful integration into society and for the respect of fundamental rights.

In conclusion, while AI can “lie” and “hallucinate,” it doesn’t do so with intention. AI’s “errors” are an extension of its training and its inability to comprehend the world as a human does. However, as researchers continue to improve these models, we can expect AI to become more reliable, more accurate, and less prone to such errors.

We just need to remember that AI is a tool—a very powerful one—and like all tools, it must be used wisely and responsibly within its scope. Autonomous systems need to be treated with caution, and understanding their limitations is crucial in making decisions about their implementation and use.

If you enjoyed this, be sure to check out our other AI articles.

