
How Should AI Be Regulated?

Individual companies adopting inconsistent guidelines may not be enough to ensure that AI development doesn’t put innovation or profits ahead of human rights and needs.

By Nate Dow

Solutions Architect Nate Dow helps BairesDev teams deliver the highest quality software and products through creative business solutions.


Artificial intelligence (AI) is a technology with the potential to drive incredible gains in fields such as medicine, education, and environmental health. But it also carries the potential for misuse, raising concerns about discrimination, bias, the shifting role of human responsibility, and other ethical issues. That’s why many experts are calling for the development of responsible AI rules and laws.

Some companies have developed their own sets of AI principles. For Microsoft, they are Fairness, Reliability & Safety, Privacy & Security, Inclusiveness, Transparency, and Accountability.

However, individual companies adopting inconsistent guidelines may not be enough to ensure that AI development doesn’t put innovation or profits ahead of human rights and needs. So who should determine the rules for everyone to follow? Whose values will those rules reflect? And what should the rules be? These are weighty issues that can’t be fully examined here, but below we introduce some of them and take a look at what’s already being done.

Defining Responsible AI

Responsible AI means different things to different people. Some interpretations highlight transparency, responsibility, and accountability; others emphasize compliance with laws, regulations, and customer and organizational values.

Another take on it is avoiding the use of biased data or algorithms and assuring that automated decisions are explainable. The concept of explainability is especially important. According to IBM, explainable artificial intelligence (XAI) is “a set of processes and methods that allow human users to comprehend and trust the results and output created by machine learning algorithms.”
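To make explainability concrete, here is a minimal sketch of one common, model-agnostic technique: permutation importance, which measures how much a model’s accuracy drops when each input feature is shuffled. The dataset and model choice are illustrative assumptions, not a reference to IBM’s or any other vendor’s XAI tooling.

```python
# Minimal explainability sketch using permutation importance in scikit-learn.
# Dataset and model here are illustrative assumptions, not any XAI product.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the accuracy drop: a
# model-agnostic view of which inputs the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: -pair[1])[:5]:
    print(f"{name}: mean accuracy drop {score:.3f}")
```

Output like this gives human users a starting point for understanding, and challenging, what drives a model’s decisions, which is exactly the trust-building that XAI aims for.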

Because of these different meanings, entities that produce rules and guidelines for the use of AI must carefully define what they hope to achieve. Even after making that determination, these entities must think through the complex set of issues involved in establishing the rules. They must consider questions like:

  • Should ethics standards be built into AI systems?
  • If so, what set of values should they reflect?
  • Who decides which set will be used?
  • How should developers resolve differences between multiple sets of values?
  • How can regulators and others determine whether the system reflects the stated values?

Further thought must be given to the data that AI systems use and its potential for bias. For example:

  • Who is collecting the data?
  • Which data will be collected, and which intentionally not collected?
  • Who is labeling the data, and what method are they using to do so?
  • How does the cost of data collection impact which data is used?
  • What systems are used to oversee the process and identify bias? (See the sketch after this list for one simple check.)
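As a concrete illustration of that last question, here is a minimal sketch of one basic bias check: comparing positive-outcome rates across groups in a training dataset. The file name and column names (gender, approved) are hypothetical placeholders, and the threshold is purely illustrative.

```python
# Minimal bias-check sketch: compare positive label rates across groups.
# File name and column names ("gender", "approved") are hypothetical.
import pandas as pd

df = pd.read_csv("loan_training_data.csv")  # hypothetical training data

# Positive-outcome rate per group: large gaps can signal sampling or
# labeling bias worth investigating before a model is ever trained.
rates = df.groupby("gender")["approved"].mean()
overall = df["approved"].mean()
print(rates)

for group, rate in rates.items():
    if abs(rate - overall) > 0.10:  # illustrative threshold, not a standard
        print(f"Possible bias: {group} rate {rate:.2f} vs. overall {overall:.2f}")
```

A check like this is only a first pass; real oversight combines many such audits with human review, but even a few lines of analysis can surface problems before they are baked into a model.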

The EU Leads

In 2018, the European Union (EU) began enforcing measures guaranteeing online service users some control over their personal data, the best known being the General Data Protection Regulation (GDPR). The EU is again leading the way in ensuring the ethical use of AI, which often relies on algorithms that handle very personal information, such as health or financial status.

Some efforts in the EU are being met with resistance. According to Brookings, “The European Union’s proposed artificial intelligence (AI) regulation, released on April 21, is a direct challenge to Silicon Valley’s common view that law should leave emerging technology alone. The proposal sets out a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses, and lightly regulates less risky AI systems.”

The regulation includes guidance on data and data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, and robustness, accuracy, and security. However, it focuses more on AI systems and less on the companies that develop them. Still, the guidelines are an important step toward creating worldwide AI standards.

Other Initiatives

In addition to the EU, many other entities are developing regulations and standards. Here are just a few examples.

  • IEEE. The Institute of Electrical and Electronics Engineers (IEEE) has published a paper titled Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. It addresses issues like human rights, responsibility, accountability, transparency, and minimizing the risks of misuse.
  • OECD. The Organisation for Economic Co-operation and Development (OECD) has established principles on AI that focus on benefits to people and respect for the rule of law, human rights, and democratic values. They also embrace transparency, security, and accountability.
  • WEF. The World Economic Forum (WEF) has developed a white paper titled AI Governance: A Holistic Approach to Implement Ethics into AI. Its introduction states, “The goal [of this White Paper] is to outline approaches to determine an AI governance regime that fosters the benefits of AI while considering the relevant risks that arise from the use of AI and autonomous systems.”

In the U.S., the Department of Defense has adopted a set of ethical principles for the use of AI. They cover five major areas, stating that the use of AI must be responsible, equitable, traceable, reliable, and governable.

Governments and other entities may also consider alternatives and complements to regulation, such as standards, advisory panels, ethics officers, assessment lists, education and training, and requests for self-monitoring.

What About Compliance?

Another consideration in this discussion is, “Even if governments and other entities create ethical AI rules and laws, will companies cooperate?” According to a recent Reworked article, “10 years from today, it is unlikely that ethical AI design will be widely adopted.” The article goes on to explain that business and policy leaders, researchers, and activists are worried that the evolution of AI instead “will continue to be primarily focused on optimizing profits and social control.”

However, those leaders and others concerned about this issue should continue to define what ethical AI looks like and create rules and guidelines to help others adopt those principles. While company values will determine how fully each business embraces those concepts, consumer choice will determine whether the businesses that don’t embrace them stay afloat.

If you enjoyed this, be sure to check out our other AI articles.
