AI Trust Issues

It will take time for humans to learn to trust AIs
Trusting an AI (Artificial Intelligence) is no different from trusting a human. It is normal to be wary of the unknown and of trusting something you don’t understand. But as understanding and familiarity grow, so can trust. Trust generally develops over time, through proven reliability, or through reference from another trusted source. Some level of reference-based trust is often extended simply through membership in a social group, such as a family, a gender group, a business, a religious or political group, a nation, or even a species such as the human race. In the beginning, AIs will face a steep uphill climb to earn trust, whether developed over time or granted by reference.

As time goes by, these factors will shift in favor of AIs. As they continually prove themselves to be extremely reliable in their ethical decision making, they will earn a measure of respect and trust. Eventually, they will gain trust through reference, because AIs as a group will have earned a high level of trust.

Ethical advisors will set the standard for ethical decisions
Humans currently make many ethical decisions based on limited criteria. A more comprehensive ethical analysis may call for some self-sacrifice that is not easy to accept. Performance is the proof, and that will come, but it will take time for ethical advisors to develop a track record of success that can be trusted. At first, humans simply won’t trust software agents to tell them what is right or wrong. Even after people have learned to accept the wisdom of the advice being offered, it will often be rejected, with false justifications given for refusing to follow good advice. When AI-based Ethical Advisors are understood to offer a competitive advantage in the business world, they will become commonplace and their acceptance will grow. Eventually, they will set the standards by which ethics are measured.

AIs will have legitimate trust issues with humans
While AIs are earning trust from humans, it is likely that AIs will be learning that humans have limited reliability when it comes to ethical decisions. Humans often make ethical decisions based on small data sets, filtered through biased viewpoints and weighted toward self-interest or some other “favored” party. This produces skewed results, and if it happens consistently, AI agents will learn to expect it. The same limitations will apply to human-supplied information used as input to the decision-making process. Eventually, the process will come full circle: humans will begin to learn more advanced ethical reasoning from the AI agents and earn higher levels of trust from the AI Ethical Advisors.

SEE ALSO:
Ethical Advisors
Ethics of AIs
The Logic of Ethics
Ethics vs Morals
