One of the issues that arises when people are discussing the use of artificial intelligence[1] (AI) is how to ensure that decisions based on AI are ethical. It's a valid concern.

"While AI is by no means human, by no means can we treat it like just a program," said Michael Biltz, managing director of Accenture Technology Vision at consulting firm Accenture. "In fact, creating AIs should be viewed more like raising a child than programming an application. That's because AI has grown to the point where it can have just as much influence as the people using it."

Employees are not only trained to do a specific job; they're also expected to understand company policies around diversity and privacy[3], for example. "AIs need to be trained and 'raised' in much the same way, to not only perform a task but to act as a responsible co-worker and representative of the company."

AI systems are making decisions in a variety of industries today -- or will be doing so in the near future -- that could have an impact on virtually everything they touch. "But the reality is that we don't yet have the standards in place to govern what's acceptable and what's not, or to outline what a company is responsible or liable for as a result of [AI-based] decisions," Biltz said.

Autonomous vehicles[4] provide an example. "They're sure to be involved in accidents that cause damage or injury, just like human drivers today," Biltz said. "The difference is that we have a clear understanding for defining fault and blame for human drivers, and that doesn't yet exist for autonomous vehicles."
