One of the primary arguments against using artificial intelligence in decision-making processes is its lack of explainability.
Explainability (also known as interpretability) refers to being able to explain, to an acceptable level, how a machine learning model produces a given output in a way that “makes sense” to a human being.
Think of how in maths classes at school you may have been asked to 'show your working' - to write out the steps you took to get from the initial problem to your solution.
For learning AIs - those trained on massive datasets to reach a given level of capability - explainability can be highly challenging.
Even when the initial algorithms used for the machine learning and the dataset used to train the model are made fully explainable, the internal method by which the AI derives a solution may still not be fully explained, and hence the AI doesn't meet the explainability test.
When presented with identical inputs for a decision process, different AIs built from the same initial algorithms and trained on the same dataset might form very different conclusions.
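As a minimal sketch of this point (an illustration of mine, not from any specific system, and assuming scikit-learn is available), two classifiers built with the same algorithm on the same data, differing only in the random seed used to initialise them, can end up disagreeing on borderline cases:

```python
# Minimal sketch (illustrative only, assuming scikit-learn is installed):
# same algorithm, same training data, different initialisation seeds.
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# A small, noisy two-class dataset with genuinely ambiguous borderline points.
X, y = make_moons(n_samples=500, noise=0.4, random_state=0)

# Identical model class and data; only the random seed differs.
model_a = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)
model_b = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=2).fit(X, y)

disagreements = int((model_a.predict(X) != model_b.predict(X)).sum())
print(f"Points on which the two 'identical' models disagree: {disagreements}")
```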
Now this may appear similar to how humans make decisions.
Give two humans the same information and decision process and, at times, they may arrive at completely different decisions. This might be due to influences from their past experiences (training), emotions, interpretations, or other factors.
When humans make decisions, it is possible to ask them how they arrived at their decision, and how they weighed various factors. And they may be able to honestly tell you.
With AIs it is also possible to ask them how they arrived at a decision.
However, the process they use to respond to this request is the same one they used to arrive at the decision in the first place. The AI model is not self-conscious, and as such there is no capability for self-reflection or objective consideration.
In fact, most machine learning models have an attention span of only a few thousand words, so they may not even recall making a decision a few minutes or days before. An AI doesn't have the consciousness to be aware that 'it', as an entity, made the decision.
This is unlike a human, who might forget a decision they made, but remains conscious that they made it and is able to 'think back' to when they did in order to provide reasons for their decision-making.
Asking an AI to explain a decision is not necessarily providing explainability for that decision. What you are getting is the machine learning model's probabilistic choice of letters and words. These may form what seems to be a plausible reason, but it isn't a reason at all.
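As a toy illustration of that point (a hypothetical sketch of mine, not how any real model is implemented), even a tiny word-level Markov chain can string probabilistically chosen words into something that reads like a reason:

```python
# Toy sketch (hypothetical, not any real model's internals): sampling from
# word-to-word probabilities produces fluent, justification-shaped text
# without any underlying reasoning behind it.
import random

random.seed(0)

# Hypothetical transitions, as if learned from justification-style sentences.
transitions = {
    "I": ["approved", "declined"],
    "approved": ["it", "the request"],
    "declined": ["it", "the request"],
    "it": ["because"],
    "the request": ["because"],
    "because": ["the risk", "the evidence"],
    "the risk": ["was"],
    "the evidence": ["was"],
    "was": ["acceptable.", "insufficient."],
}

word, sentence = "I", ["I"]
while not word.endswith("."):
    word = random.choice(transitions[word])
    sentence.append(word)

# Prints a plausible-sounding "reason" assembled purely from word probabilities.
print(" ".join(sentence))
```

The output sounds like a justification, but it was produced by sampling word probabilities, not by recalling or reflecting on any decision.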
You can even simply tell a machine learning AI that it made a given decision and ask it why it did, and it will write something plausible that justifies that decision.
There's a Chinese proverb that describes this perfectly:
“A bird does not sing because it has an answer. It sings because it has a song.”