Friday, February 24, 2023

To observe future use of AI, watch Ukraine

I'm not given to regularly commenting on global events, such as major wars, through this blog. However, given the active use of advanced technology by both Ukraine and Russia in the current war, I'm making a brief exception to comment on how these technologies, including AI, are being used.

The Russian invasion of Ukraine has seen some of the most advanced uses of technology on the battlefield in history. Some of this involves AI, some involves other technologies, but all of it demonstrates the practical potential of these technologies and should be considered in any nation's future defense planning.

Now drones have been used in previous conflicts in Afghanistan and elsewhere, with militia forces modifying off-the-shelf products into aerial scouts and improvised explosive devices. Meanwhile, US forces have used custom-built, human-controlled Predators for over ten years to precisely target enemy commanders and units.

However, the war in Ukraine represents the largest deployment and the most versatile use of drones to date by regular armies opposing each other in the field.

For example, both Ukraine and Russia have successfully used autonomous watercraft in assaults. These unmanned surface and underwater drones have been used to inflict damage on opposing manned vessels and to lay and clear mines at a fraction of the cost of building, crewing and maintaining manned vessels.

The Ukrainian attack on Russia's Black Sea Fleet in port exhibits signs of being one of the first uses of a drone swarm in combat, with a group of kamikaze surface and underwater drones, potentially with aerial drone support, used in concert to damage several Russian ships.

While perhaps not as effective as was hoped, this is a prototype for future naval drone use. It heralds an era where swarms of relatively cheap and disposable drones - partially or completely autonomous to prevent signal blocking or hijacking - are used alongside or instead of manned naval vessels to inflict damage and deter an attacker, acting as a force multiplier.

The use of autonomous minelayers offers the potential for 'lay and forget' minefields astride enemy naval routes and ports, again limiting and deterring enemy naval movements. The lower cost and disposability of these drones, compared to training explosive-handling human divers and building, maintaining, defending and crewing manned minelaying and mine-clearance vessels, makes them a desirable alternative.

We've also seen extensive use of aerial drones by both sides in the conflict to spot and target enemy combatants, allowing more targeted use of artillery and manpower and helping reduce casualties by lifting aspects of the fog of war. Knowing the numbers and strength of your opponent on the battlefield greatly enhances a unit's capability to successfully defend or assault a position.

While many of these drones are human controlled, AI has been used in situations where direct control is blocked by enemy jamming or hacking attempts. AI has also been used to fly 'patrol routes' and to 'return home' once a drone has completed a mission - including ensuring that drones fly an evasive course to conceal the location of their human controllers.

The Ukrainian war has even seen the first publicly video-recorded drone-on-drone aerial engagement, with a Ukrainian drone ramming a Russian drone to knock it out of the sky and remove the enemy's best source of intelligence.

These examples of drone use aren't totally new to warfare. 

Hundreds of years ago, unmanned fire ships were used in naval combat - loaded with explosives and left to drift into enemy warships and explode.

Similarly, early aerial combat in World War One saw pilots take pistols and bricks aloft to fire at enemy planes or drop on enemy troops below - just as drones today are being modified to carry grenades to drop on infantry positions or used to ram opposing drones.

The war will help advance the technology and improve the tactics. If nothing else, Ukraine is a testing ground for learning how to use drones effectively within combined forces to improve overall military effectiveness and reduce casualties.

And artificial intelligence is becoming increasingly important as a control alternative when an enemy blocks signal or attempts to take control of an army's drone assets.

We need to put these learnings to use in our own military planning and acquisitions, so that Australia's military becomes capable of fighting the next war, rather than the last.

Monday, February 20, 2023

Do AIs dream of electric explainability?

One of the primary arguments against artificial intelligence in many processes related to decision-making is lack of explainability.

Explainability (also known as interpretability) refers to being able to explain how a machine learning AI model functions to produce a given output in a way that “makes sense” to a human being at an acceptable level.

Think of how, in maths classes at school, you may have been asked to 'show your working' - to write out the steps you took to get from the initial problem to your solution.

For learning AIs - those trained on massive datasets to reach a level of capability - explainability can be highly challenging.

Even when the initial machine learning algorithms and the dataset used to train the AI are made fully transparent, the internal method by which the AI derives a solution may not be fully explainable, and hence the AI doesn't meet the explainability test.

When presented with identical inputs for a decision process, different AIs, using the same initial algorithms and trained on the same training dataset, might form very different conclusions.
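
To make this concrete, here's a toy sketch in Python. The data, the scores and the deliberately simplified 'training by random search' are all invented for illustration - this is nothing like how real models are trained - but it shows the underlying idea: two models built from the same algorithm and the same training data, differing only in their random starting point, can learn different cut-offs and disagree on a borderline case.

```python
import random

# Toy illustration only: invented data and a simplified 'learning' procedure.
# (score, should_approve) pairs - the same training data for both models.
data = [(35, 0), (42, 0), (48, 1), (55, 1)]

def train_threshold(seed, steps=200):
    """Learn an approval cut-off from the same data, varying only the seed."""
    rng = random.Random(seed)
    best_t, best_err = 45.0, float("inf")
    for _ in range(steps):
        t = rng.uniform(30, 60)
        err = sum((score >= t) != bool(label) for score, label in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

model_a = train_threshold(seed=1)   # same algorithm, same data...
model_b = train_threshold(seed=2)   # ...different random starting point

borderline_score = 45
print(borderline_score >= model_a, borderline_score >= model_b)
# The two models may disagree on whether a score of 45 is 'approved'.
```

With a real machine learning model the same effect arises from random weight initialisation, the order in which training data is seen, and other sources of randomness during training.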

Now this may appear similar to how humans make decisions. 

Give two humans the same information and decision process and, at times, they may arrive at completely different decisions. This might be due to influences from their past experiences (training), emotions, interpretations, or other factors.

When humans make decisions, it is possible to ask them how they arrived at their decision, and how they weighed various factors. And they may be able to honestly tell you.

With AIs it is also possible to ask them how they arrived at a decision.

However, the process they use to respond to this request is the same as the one they used to arrive at their decision in the first place. The AI model is not self-conscious, and as such there's no capability for self-reflection or objective consideration.

In fact, most machine learning models have an attention span (or 'context window') of only a few thousand words, so they may not even recall making a decision a few minutes or days before. AI doesn't have the consciousness to be aware that 'they' as an entity made the decision.

This is unlike a human, who might forget a decision they made, but be conscious they made it and able to 'think back' to when they did to provide reasons for their decision-making.
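
As a rough sketch of what that limited 'attention span' means in practice, consider the Python illustration below. The word limit and the messages are invented, and real models measure their window in tokens rather than words, but the principle is the same: anything that no longer fits in the window simply isn't visible to the model.

```python
# Rough illustration: a model only 'sees' what fits inside its context window.
# The limit and messages are invented; real models count tokens, not words.
CONTEXT_LIMIT_WORDS = 12

conversation = [
    "Day 1: model output - recommend approving grant application A-102.",
    "Day 2: a long, unrelated discussion about another topic entirely.",
    "Day 3: user asks - why did you approve application A-102?",
]

def visible_context(messages, limit=CONTEXT_LIMIT_WORDS):
    """Keep only the most recent messages that still fit within the limit."""
    kept, used = [], 0
    for msg in reversed(messages):       # newest first
        used += len(msg.split())
        if used > limit:
            break                        # older messages fall out of view
        kept.insert(0, msg)
    return kept

print(visible_context(conversation))
# Only the Day 3 question fits - the Day 1 'decision' isn't part of what the
# model sees when asked to explain it.
```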

Asking an AI to explain a decision is not necessarily providing explainability for that decision. What you are getting is the machine learning model's probabilistic choices of letters and words. These may form what seems to be a plausible reason, but it isn't a reason at all.

You can even simply tell a machine learning AI that it made a given decision and ask it why it did, and it will write something plausible that justifies that decision.

At a basic level I can easily explain how a machine learning AI, such as ChatGPT or Jurassic, arrives at a given output. It takes the input, passes it through a huge probability engine, then writes an output by selecting probabilistically likely words.

For variability it doesn't always select the highest probability every time, which is why the same input doesn't always result in the same output.
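
Here's a minimal sketch of that idea in Python. The words and probabilities are invented, and this isn't how any particular vendor's model is implemented, but it shows the mechanism: the model assigns a probability to each candidate next word and then samples one, rather than always taking the single most likely word. Real systems also expose settings such as 'temperature' that control how adventurous this sampling is.

```python
import random

# Invented next-word probabilities - a stand-in for the model's probability
# engine scoring what could follow "The application should be".
next_word_probs = {"approved": 0.45, "refused": 0.35, "deferred": 0.20}

def pick_next_word(candidates):
    """Sample a word in proportion to its probability, not just the top one."""
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print([pick_next_word(next_word_probs) for _ in range(5)])
# e.g. ['approved', 'refused', 'approved', 'deferred', 'approved']
# Re-running gives a different sequence: same input, different output.
```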

However, this doesn't explain how an AI makes a 'decision' - that is, prefers one specific option over other options. It does explain why the same AI, asked the same 'question' (input), may produce diametrically opposed decisions when asked to regenerate its response.

The AI isn't interested in whether a decision is 'better' or 'worse' - simply that it provides an output that satisfies the end user.

There's a Chinese proverb that describes this perfectly:
“A bird does not sing because it has an answer. It sings because it has a song.”
This is why no current machine learning model is explainable in its decision-making, and why we should not use them in situations where they are making decisions.

Now if you wish to use them as a way to provide information to assist decision-making, or to help write up the decision once it has been made, they have enormous utility.

But if you want explainability in decision making, don't use a machine learning AI.
