
The Ethics of AI in Software Development

Artificial Intelligence has brought many opportunities and challenges to software development. However, most people overlook its ethical implications.

By Jeff Moore

Senior Engagement Manager Jeff Moore strives to develop, maintain, and expand relationships across BairesDev while focusing on business development.

9 min read


Since Isaac Asimov introduced his 3 Laws of Robotics in his 1942 short story “Runaround,” there has been concern about the ethical dilemmas posed by artificial intelligence. Though he was “only” a science fiction writer, Asimov already understood the underlying issue with developing sentient machines capable of deciding things on their own.

As visionary as he was, Asimov saw the possibility of a future where we would develop robots to help us with our everyday tasks – or even replace us in doing them altogether. And while the 3 Laws have been expanded, questioned, and disputed, the basic intent behind them is as valid as it ever was.

Basically, Asimov was proposing something admirable – that the development of artificial intelligence at every stage, including the software development stage, should include an ethical component as an essential part. Well, the future Asimov partly foresaw in his stories is now here and, as AI becomes increasingly complex and common, that basic intent has become a topic of hot discussion.

Roboethics and machine ethics

Given that the development of AI opens up the possibility of creating machines that can think for themselves, the number of ethical issues that might derive from that possibility is high. However, we can summarize them in two groups: one concerning the development of “safe machines” incapable of hurting humans and other morally relevant beings, and a second one about the moral status of the machines themselves.

This distinction has divided the ethics of AI into 2 separate bodies of study. First, there’s roboethics, which deals with the people developing the machines. It studies how humans design, build, use, and treat AI that might end up being smarter than its own creators.

And then there’s machine ethics, which focuses on the behavior of said machines. Machine ethics understands AI-powered machines as artificial moral agents (AMAs) capable of discerning between complex scenarios and potential plans of action that take multiple factors into account.

Dividing AI ethics into these 2 fields seeks to cover (albeit in a broad sense) almost all aspects of dealing with intelligent machines and their relationship with humans. Both of them take into account the opinions and suggestions of the many actors that converge on this important topic – from philosophers and academics to business owners and lawmakers. They are currently discussing the implications of working with sentient machines and how we should act in a context where they are becoming more and more common.

What are the ethical implications of AI?

The use of artificial intelligence, especially when coupled with machine learning, carries several ethical implications for the developers working on AI-based software, for the companies that use it, and for society as a whole.

For developers, AI means they have to go beyond the technical aspects of the solutions they work on and balance what they need from their applications against the potential impact of said applications. A design flaw, an unchecked algorithm, or an overlooked feature with ambiguous uses might end up producing disastrous results.

What Microsoft did with its Tay chatbot in 2016 illustrates this perfectly. The app, meant to interact with Twitter users, taught itself new things with each interaction. However, Twitter’s peculiar user base exploited a flaw in Tay’s algorithm to load it with racist and offensive ideas. In under a day, the chatbot was supporting genocide and denying historical atrocities. From a technical standpoint, Tay was working as intended, but ethically it was a failure.
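
Tay’s actual architecture was never made public, but the failure mode is easy to sketch: an online-learning bot that folds every user message back into its own behavior with no vetting step in between. The Python sketch below is purely illustrative – the class, its methods, and the messages are hypothetical, not Microsoft’s code:

```python
# Minimal sketch of the failure mode: a chatbot that learns from every user
# message with no vetting step. Tay's real design is not public; everything
# here is hypothetical and purely illustrative.
import random

class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = ["Hello! Nice to meet you."]

    def interact(self, user_message):
        # Every incoming message is folded back into future replies, unvetted.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.interact("something offensive")   # a coordinated group can poison the bot
print(bot.interact("Hi there!"))      # the reply may now echo whatever it was fed
```

The technically obvious fixes – filtering what gets learned, rate-limiting repeated inputs, keeping a human in the loop – are exactly the ethical component the design was missing.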

Google had a similarly catastrophic experience with image recognition in Google Photos back in 2015. The algorithm labeled African-American people as gorillas, and developers had to remove anything gorilla-related from the app as a result of the backlash. Surely it wasn’t the developers’ intention for that to happen, but a flawed model and poor implementation can lead to this kind of result.
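
Disparities like this are often detectable before release. As a purely illustrative sketch (the groups, labels, and threshold below are hypothetical, not Google’s actual pipeline), comparing a model’s error rate across demographic groups during evaluation is one way to surface the problem early:

```python
# Hypothetical sketch: compare a model's error rate per demographic group
# on an evaluation set and flag large disparities before shipping.
from collections import defaultdict

def per_group_error_rates(examples):
    """examples: iterable of (group, true_label, predicted_label) tuples."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, prediction in examples:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy evaluation data: the point is the disparity, not the numbers.
evaluation = [
    ("group_a", "person", "person"),
    ("group_a", "person", "person"),
    ("group_b", "person", "person"),
    ("group_b", "person", "animal"),  # errors concentrated in one group
]

rates = per_group_error_rates(evaluation)
if max(rates.values()) - min(rates.values()) > 0.05:  # arbitrary threshold
    print(f"Warning: error rates differ across groups: {rates}")
```

Even a crude check like this turns the ethical concern into a concrete release criterion instead of an afterthought.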

The companies themselves also have their own part to play in AI ethics. In general, businesses pursue financially oriented goals, which means they are more prone to use solutions that save them money. AI is perfect for that, as AI-powered software can take in huge amounts of data, process them, and offer results in the form of suggestions, strategies to follow, and even autonomous decisions.

All of that saves a ton of time and money for businesses, as humans doing exactly the same work would need considerably more resources to offer the same kind of results. Yet considering AI only from the cost-cutting angle is ethically troubling. On the one hand, there’s the limited ability of AI software to weigh the implications of its own suggestions and strategies.

On the other hand, there’s the human impact it can have across industries, especially on a workforce that might be pushed out of the market by increasing automation.

What’s more, in today’s context companies hold all the power over AI ethics. Since they are the ones developing the impressive number of AI-powered apps, businesses are the ones deciding their ethical boundaries. And even the rising trend of ethics boards within those companies (from big tech firms like Google and Facebook to brand-new startups) can’t seem to address that issue, as the power is still gripped firmly by private hands that put stakeholders first and society second.

That turns the third interested party in this issue into an extremely important actor in the whole equation. Society (understood as the end users of those applications as well as the public institutions) has to take an interest in this development and actively participate in it. Since the vast majority of everyday people will “suffer” from the advancement of AI, society’s voice doesn’t just have to be heard – it also has to be taken into account as artificial intelligence moves into the future.

This can take two forms. On one hand, it’s highly important for governments and public institutions to provide the necessary regulations to ensure that ethics is a deciding factor in AI development. Since they are the guardians of the public interest, governments have to take an active stance to keep privately held interests in check.

On the other hand, the general public has to engage with AI ethics as well. In an era when we have grown accustomed to handing over sensitive information without a second thought, we as users have to be informed about all AI-related aspects. How does it work? What happens to our data? Who can access it? And what do these machines do with it?

Combining all of these into a reasonable discussion that contemplates the potential implications of AI software is very much needed to prevent something like facial recognition from going from a harmless “this is a photo of a white woman smiling” to an Orwellian mass surveillance scenario.

The challenges that come with the AI era

The challenge is clear. Instead of asking whether we can do certain things with AI (which, in light of recent developments, is the only thing that seems to matter to developers and companies), we should be asking ourselves whether we should do them – and if so, how we should do them.

This change in focus implies the development of laws, regulations, and principles for AI’s use in business. The main goal of this ethical framework should be to limit the risk of ethical issues arising from improper uses of AI technologies. For that to happen, there are a number of things to consider, according to a panel of experts convened by the European Union:

  • Human control: AI shouldn’t be seen as a replacement for human autonomy, nor should it limit it. All systems should be overseen by humans, who should be the ones ultimately deciding whether the decisions made by the software are “right or wrong”.
  • Robust security: since AI works with sensitive data and makes its decisions based on it, all systems should be extremely secure and accurate. This means they have to be resilient in the face of external attacks and reliable in their decision-making process.
  • Private data: security extends to the collected data, ensuring that all the information that’s gathered is private and stays that way.
  • Transparency: AI systems, even the more complex ones, should be easily understandable by any human. Companies using them should explain how the AI software works and how it makes its decisions, in terms end users can clearly understand.
  • Diverse and unbiased: all AI systems should be available to all of humankind regardless of age, gender, race, or any other characteristic. Additionally, none of those characteristics should be allowed to bias the results and decisions made by the AI.
  • Societal well-being: AI systems can pursue any goal as long as it enhances positive social change. The expert panel stressed the need for all of them to be sustainable, meaning that AI solutions should be ecologically responsible as a core aspect of that social change.
  • Accountability: everything related to the AI’s actions should be auditable. The idea is to ensure that the negative impact of these systems is kept to a minimum. This also means that any negative impact that does appear should be reported in due time (a minimal sketch of what such an audit trail might look like follows this list).
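
To make the accountability point concrete, here is a minimal sketch of an audit trail around an automated decision. The decision function, its inputs, and the log format are hypothetical; a real system would add access controls, retention policies, and human review:

```python
# Minimal sketch of an audit trail for automated decisions (hypothetical).
import json
import time
from functools import wraps

def audited(log_path):
    """Wrap a decision function so every call is appended to an audit log."""
    def decorator(decide):
        @wraps(decide)
        def wrapper(*args, **kwargs):
            decision = decide(*args, **kwargs)
            record = {
                "timestamp": time.time(),
                "function": decide.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "decision": decision,
            }
            with open(log_path, "a") as log_file:
                log_file.write(json.dumps(record, default=str) + "\n")
            return decision
        return wrapper
    return decorator

@audited("decisions.log")
def approve_loan(credit_score, income):
    # Placeholder logic standing in for a real model's decision.
    return credit_score > 650 and income > 30000

approve_loan(700, 45000)  # this call and its outcome are now reviewable later
```

A log like this doesn’t make a decision right, but it makes it reviewable – which is the precondition for reporting and correcting negative impacts in due time.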

Rising to such a challenge won’t be easy. As governments struggle to keep pace with AI’s extremely dynamic development and companies hold onto their power over those advancements, the proposed core values seem like nice guidelines to start with – but today they feel more like a utopia than a reality.

In that sense, they feel closer to Asimov’s overly simplistic rules than to a mature framework for ethics in the AI era. However, they share the writer’s intent of steering the development of artificial intelligence so it doesn’t end up being just a tool for profit and control but rather an advancement for all people.

The path ahead promises to be rough and will ask us to stay vigilant and active to ensure that artificial intelligence results in positive change for everybody, and not in the dystopian future some are warning us of.

If you enjoyed this, be sure to check out our other AI articles.
