Monday, February 20, 2023

Do AIs dream of electric explainability?

One of the primary arguments against using artificial intelligence in decision-making processes is its lack of explainability.

Explainability (also known as interpretability) refers to being able to explain, in a way that “makes sense” to a human being at an acceptable level of detail, how a machine learning model produces a given output.

Think of how, in maths classes at school, you may have been asked to 'show your working' - to write out the steps you took to get from the initial problem to your solution.

For learning AIs - those trained on massive datasets to reach a level of capability - explainability can be highly challenging.

Even when the initial algorithms used for the machine learning, and the dataset used to train the AI, are made fully explainable, the internal method by which the AI derives a solution may remain opaque - and hence the AI doesn't meet the explainability test.

When presented with identical inputs for a decision process, different AIs, using the same initial algorithms and trained on the same training dataset, might form very different conclusions.

Now this may appear similar to how humans make decisions. 

Give two humans the same information and decision process and, at times, they may arrive at completely different decisions. This might be due to influences from their past experiences (training), emotions, interpretations, or other factors.

When humans make decisions, it is possible to ask them how they arrived at their decision, and how they weighed various factors. And they may be able to honestly tell you.

With AIs it is also possible to ask them how they arrived at a decision.

However, the process they use to respond to this request is the same one they used to arrive at the decision in the first place. The AI model is not self-conscious, and as such has no capability for self-reflection or objective consideration.

In fact, most machine learning models have an attention span of only a few thousand words, so they may not even recall making a decision a few minutes or days before. An AI doesn't have the consciousness to be aware that 'it', as an entity, made the decision.

This is unlike a human, who might forget a decision they made, but be conscious they made it and able to 'think back' to when they did to provide reasons for their decision-making.

Asking an AI to explain a decision does not necessarily provide explainability for that decision. What you get is the machine learning model's probabilistic choice of letters and words. These may form what seems to be a plausible reason, but it isn't a reason at all.

You can even simply tell a machine learning AI that it made a given decision and ask it why it did, and it will write something plausible that justifies that decision.

At a basic level I can easily explain how a machine learning AI, such as ChatGPT or Jurassic, arrives at a given output. It takes the input, passes it through a huge probability engine, then writes an output by selecting probabilistically likely words.

For variability it doesn't always select the highest probability every time, which is why the same input doesn't always result in the same output.
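This sampling step can be sketched in a few lines of Python. Everything here is illustrative - the word list, scores, and function names are invented for the example, not taken from any real model - but it shows the core idea: the model assigns each candidate word a score, converts the scores into probabilities, and then draws one word at random weighted by those probabilities, so the top word usually wins but not always.

```python
import math
import random

def sample_next_word(scores, temperature=1.0, rng=None):
    """Pick the next word from hypothetical model scores.

    Higher temperature flattens the distribution (more variety);
    lower temperature sharpens it (more predictable output).
    """
    rng = rng or random.Random()
    words = list(scores.keys())
    scaled = [s / temperature for s in scores.values()]
    # Softmax: turn raw scores into probabilities that sum to 1.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw - the likeliest word is not guaranteed to win.
    return rng.choices(words, weights=probs, k=1)[0]

# Invented scores for the word following "The cat sat on the".
scores = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}

rng = random.Random(0)
picks = [sample_next_word(scores, rng=rng) for _ in range(100)]
print(picks.count("mat"), picks.count("sofa"), picks.count("moon"))
```

Running the sampler a hundred times shows "mat" dominating while "sofa" and "moon" still appear occasionally - which is exactly why regenerating a response to the same input can produce a different answer.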

However, this doesn't explain how an AI makes a 'decision' - AKA prefers one specific option over others. It does explain why the same AI, asked the same 'question' (input), may produce diametrically opposed decisions when asked to regenerate its response.

The AI isn't interested in whether a decision is 'better' or 'worse' - simply that it provides an output that satisfies the end user.

There's a Chinese proverb that describes this perfectly:
“A bird does not sing because it has an answer. It sings because it has a song.”
This is why no current machine learning model is explainable in its decision-making. And why we should not use these models in situations where they are making decisions.

Now if you wish to use them to provide information that assists decision-making, or to help write up a decision once it has been made, they have enormous utility.

But if you want explainability in decision making, don't use a machine learning AI.
