Friday, February 24, 2023

To observe future use of AI, watch Ukraine

I'm not given to regularly commenting on global events, such as major wars, through this blog. However, given the active use of advanced technology by both Ukraine and Russia in the current war, I'm making a brief exception to comment on those technologies, including AI.

The Russian invasion of Ukraine has seen some of the most advanced uses of technology on the battlefield in history. Some of this involves AI, some other technologies, but all of it demonstrates the practical potential of these technologies and should be considered within any nation's future defense planning.

Now, drones have been used in previous conflicts in Afghanistan and elsewhere, with militia forces modifying off-the-shelf products into aerial scouts and IEDs. Meanwhile, US forces have used custom-built, human-controlled Predators for over ten years to pinpoint and target enemy commanders and units.

However, the war in Ukraine represents the largest deployment and the most versatile use of drones to-date by regular armies opposing each other in the field.

For example, both Ukraine and Russia have successfully used autonomous watercraft in assaults. These unmanned surface and underwater drones have been used to damage opposing manned vessels and to lay and clear mines, at a fraction of the cost of building, crewing and maintaining manned vessels.

The Ukrainian attack on Russia's Black Sea Fleet in port exhibits signs of being one of the first uses of a drone swarm in combat, with a group of kamikaze surface and underwater drones, potentially with aerial drone support, used in concert to damage several Russian ships.

While perhaps not as effective as hoped, this is a prototype for future naval drone use. It heralds an era where swarms of relatively cheap, disposable drones - partially or completely autonomous to resist signal jamming or hijacking - are used alongside or instead of manned naval vessels to inflict damage and deter an attacker, acting as a force multiplier.

The use of autonomous minelayers offers the potential for 'lay and forget' minefields astride enemy naval routes and ports, again limiting and deterring enemy naval movements. The lower cost and disposability of these drones - compared to training explosive-handling human divers, or building, maintaining, defending and crewing manned minelaying and mine-clearance vessels - makes them a desirable alternative.

We've also seen extensive use of aerial drones by both sides in the conflict to spot and target enemy combatants, allowing more targeted use of artillery and manpower and helping reduce casualties by lifting aspects of the fog of war. Knowing the numbers and strength of your opponent on the battlefield greatly enhances a unit's capability to successfully defend or assault a position.

While many of these drones are human-controlled, AI has been used in situations where direct control is blocked by enemy jamming or hacking attempts. AI has also been used for 'patrol routes' and to 'return home' once a drone has completed a mission - including ensuring that drones fly an evasive course to conceal the location of their human controllers.

The Ukrainian war has even seen the first publicly recorded aerial drone-on-drone engagement, with a Ukrainian drone ramming a Russian drone to knock it out of the sky and remove the enemy's best source of intelligence.

These examples of drone use aren't totally new to warfare. 

Hundreds of years ago unmanned fire ships were used in naval combat - set alight or packed with explosives and left to drift into enemy warships.

Similarly, early aerial combat in World War One saw pilots take pistols and bricks aloft to fire at enemy planes or drop on enemy troops below - much as drones today are modified to carry grenades to drop on infantry positions, or used to ram opposing drones.

The war will help advance the technology and improve the tactics. If nothing else, Ukraine is a testing ground for learning how to effectively use drones within combined forces to improve overall military effectiveness and reduce casualties.

And artificial intelligence is becoming increasingly important as a control alternative when an enemy jams signals or attempts to take control of an army's drone assets.

We need to put these learnings to use in our own military planning and acquisitions, so that Australia's military becomes capable of fighting the next war, rather than the last.


Monday, February 20, 2023

Do AIs dream of electric explainability?

One of the primary arguments against artificial intelligence in many processes related to decision-making is lack of explainability.

Explainability (also known as interpretability) refers to being able to explain how a machine learning AI model functions to produce a given output in a way that “makes sense” to a human being at an acceptable level.

Think of maths classes at school, where you may have been asked to 'show your working' - to write out the steps you took to get from the initial problem to your solution.

For learning AIs, those that are trained on massive datasets to reach a level of capability, explainability can be highly challenging.

Even when the algorithms used for the machine learning and the dataset used to train it are made fully transparent, the internal method by which the AI derives a solution may not be fully explainable, and hence the AI doesn't meet the explainability test.
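To make the contrast concrete, here's a minimal sketch (using scikit-learn purely as an illustration of my own, not anything from the systems discussed here) of a small model that can literally 'show its working' as human-readable rules - something that isn't feasible for a large language model with billions of learned weights.

    # Illustrative only: a tiny decision tree can print its learned rules,
    # which is the machine equivalent of 'showing your working'.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(iris.data, iris.target)

    # The printed rules explain exactly how each prediction is reached.
    # No comparable readout exists for a model with billions of parameters.
    print(export_text(tree, feature_names=list(iris.feature_names)))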

When presented with identical inputs for a decision process, different AIs, using the same initial algorithms and trained on the same training dataset, might form very different conclusions.

Now this may appear similar to how humans make decisions. 

Give two humans the same information and decision process and, at times, they may arrive at completely different decisions. This might be due to influences from their past experiences (training), emotions, interpretations, or other factors.

When humans make decisions, it is possible to ask them how they arrived at their decision, and how they weighed various factors. And they may be able to honestly tell you.

With AIs it is also possible to ask them how they arrived at a decision.

However, the process they use to respond to this request is the same one they used to arrive at the decision in the first place. The AI model is not self-conscious, and as such there's no capability for self-reflection or objective consideration.

In fact, most machine learning models have a context window of only a few thousand words, so they may not even recall making a decision a few minutes or days before. An AI doesn't have the consciousness to be aware that 'it', as an entity, made the decision.

This is unlike a human, who might forget a decision they made, but be conscious they made it and able to 'think back' to when they did to provide reasons for their decision-making.

Asking an AI to explain a decision does not necessarily provide explainability for that decision. What you are getting is the machine learning model's probabilistic choices of letters and words. These may form what seems to be a plausible reason, but it isn't the actual reason at all.

You can even simply tell a machine learning AI that it made a given decision and ask it why it did, and it will write something plausible that justifies that decision.

At a basic level I can easily explain how a machine learning AI, such as ChatGPT or Jurassic, arrives at a given output. It takes the input, parses it through a huge probability engine, then writes an output by selecting probabilistically likely words.

For variability it doesn't always select the highest-probability word, which is why the same input doesn't always result in the same output.
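As a toy sketch of that selection step (illustrative only - not how any particular model is actually implemented), temperature-scaled sampling over a handful of candidate words looks something like this:

    # Toy illustration of probabilistic next-word selection with 'temperature'.
    # Real models score tens of thousands of tokens; here we use five words.
    import numpy as np

    def sample_next_word(words, logits, temperature=0.8, rng=np.random.default_rng()):
        """Pick the next word from model scores (logits), scaled by temperature."""
        scaled = np.array(logits) / temperature
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(words, p=probs)

    words = ["the", "a", "Ukraine", "drone", "however"]
    logits = [2.5, 2.1, 1.0, 0.8, -3.0]         # higher score = more likely

    # Low temperature: almost always the top word. Higher: more variety,
    # which is why regenerating a response can produce different output.
    print([sample_next_word(words, logits, 0.2) for _ in range(5)])
    print([sample_next_word(words, logits, 1.2) for _ in range(5)])

The only 'reasoning' here is arithmetic over probabilities - which is precisely the point.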

However, this doesn't explain how an AI makes a 'decision' - that is, prefers one specific option over others. It does explain why the same AI, given the same 'question' (input), may produce diametrically opposed decisions when asked to regenerate its response.

The AI isn't interested in whether a decision is 'better' or 'worse' - simply that it provides an output that satisfies the end user.

There's a Chinese proverb that describes this perfectly:
“A bird does not sing because it has an answer. It sings because it has a song.”
This is why no current machine learning models can be explainable in their decision-making. And why we should not use them in situations where they are making decisions.

Now if you wish to use them as a way to provide information to assist decision-making, or to help write up the decision once it has been made, they have enormous utility.

But if you want explainability in decision making, don't use a machine learning AI.


Wednesday, February 15, 2023

DTA chooses a cautious path for generative AI

The DTA's CEO, Chris Fechner, has advised public servants to be cautious in their use of ChatGPT and other generative AI, as reported by InnovationAus.

This is an unsurprising but positive response. While it suggests public servants use caution, it doesn't close down experimentation and prototyping.

Given how recently generative AI became commercially useful and that most commercial generative AIs are currently based overseas (noting my company has rolled out local prototypes), there are significant confidentiality/security challenges with generative AI for government use, alongside the challenges of accuracy/factualness and quality assurance.

Given my public sector background, I began experimenting with these AIs for government use from October 2020. Within a few weeks I was pleasantly surprised at how well an AI such as GPT-3 could produce minutes and briefing papers from associated information, accurately adopting the tone and approach I had used and encountered during my years in the APS.

Subsequently I've used generative AI to develop simulated laws and policy documents, and to provide insightful advice based on regulations and laws.

This is just the tip of the iceberg for generative AI in government. 

I see potential to accelerate the production of significant amounts of internal correspondence, reports, strategies, intranet content and various project, product and user documentation using the assistance of AI.

There's also enormous potential to streamline the production and repurposing of externally focused content; turning reports into media releases, summaries and social posts; supporting engagement processes through the analysis of responses; development and repurposing of communications materials; and much more.

However, it's important to do this within the context of the public service - which means ensuring that the generative AIs used are appropriately trained and finetuned to the needs of an agency.

Also critical is recognising that generative AI, like digital, should not be controlled by IT teams. It is a business solution that requires skills that few IT teams possess. For example, finetuning and prompt engineering both require strong language capabilities and business knowledge to ensure that an AI is appropriately finetuned and prompted to deliver the outcomes required.

Unlike traditional computing, where applications can be programmed to select from a controlled set of options, or a blocklist used to exclude dangerous or inappropriate ones, generative AIs must be trained and guided towards desired behaviour - more akin to parenting than programming.

I'm certain that the folks experimenting with generative AI in government are more likely on the business end than the IT end - as we saw with digital services several decades ago.

And I hope the public sector remembers the lessons from that period, and that the battles between business and IT are resolved faster and more smoothly than they were with digital.


Thursday, February 09, 2023

AI is not going to destroy humanity

I've read a few pieces recently, one even quoting an Australian MP, Julian Hill, that make claims of "catastrophic risks" from AI to humanity.

Some of the claims are that "ChatGPT diminishes the ability for humans to think critically", that "AI will eliminate white collar jobs" or even that "AI will destroy humanity".

Even a session I ran yesterday for business owners, about how to productively use ChatGPT in their businesses, had several folks who expressed concern and fear about how AI would impact society.

It's time to take a deep breath and reflect.

I recall similar sentiments at the dawn of the internet, and history records them at the invention of the printing press. There were similarly many fearful articles and books published in 1999 ahead of the 'Y2K bug' that predicted planes would fall out of the sky and tax systems would crash. Even the response of some commentators to the recent Chinese balloon over the US bears the same hallmarks of fear and doubt.

It's perfectly normal for many folks to feel concerned when something new comes along - one could even say it's biologically driven, designed to protect our nomadic ancestors from unknown threats as they traversed new lands.

Stoking these fears of a new technology heralding an unknown future is the stock-in-trade of sensationalists and attention seekers, whereas providing calm and reasoned perspectives doesn't attract the same level of engagement.

Yes, new technology often heralds change and uncertainty. There's inevitably a transition period that occurs once a new technology becomes visible to the public and before it becomes an invisible part of the background.

I'd suggest that AI has existed as a future fear for humanity for many years. It is used by popular entertainment creators to denote the 'other' that we fear - a malevolent non-human intelligence that only wishes us harm.

From Skynet to Ultron to M3gan, AI has been an easy plot device to provide an external threat for human protagonists (and occasionally 'good' AIs like Vision) to overcome. 

With the arrival of ChatGPT, and the wave of media attention to this improvement on OpenAI's GPT-3, AI stopped being a future fiction and became a present fear for many.

Anyone can register to use the preview for free, and marvel at ChatGPT's ability to distill the knowledge of mankind into beautifully written (if often inaccurate) prose.

And yet, and yet...

We are still in the dawn of the AI revolution. Tools like ChatGPT, while having significant utility and range, are still in their infancy and only offer a fraction of the capabilities we'll see in the next several years.

Despite this, my view is that AI is no threat to humanity, other than to our illusions. 

It is an assistive tool, not an invading force. Like other tools it may be put to both positive and negative uses, however it is an extension of humanity's potential, serving our goals and ambitions.

To me AI is a bigger opportunity than even the internet to hold a mirror up to humanity and see ourselves in a new light.

Humanity is enriched by diverse perspectives, but until now these have largely come from other humans. While we've learnt from nature, using evolved designs to inform our own, we've never co-inhabited the planet with a non-human intelligence equivalent to, but different from, our own.

AI will draw on all of humanity's knowledge, art and expertise to come to new insights that a human may never consider.

This isn't merely theoretical. It's already been demonstrated by the more primitive AIs we've developed to play games such as Go. When AlphaGo defeated top professional Lee Sedol 4-1, it taught human players new ways to look at and play the game - approaches that no human would have considered.

Imagine the possibilities that could be unlocked in business and governance by accessing more diverse non-human perspectives. New pathways for improvement will open, and less effective pathways, the illusions that humans are often drawn to, will be exposed.

I use AI daily for many different tasks. In this week alone I've used it to help my wife write a much-praised eulogy for her father, to roleplay a difficult customer service situation and work through remediation options, to develop business and marketing plans, and to write songs, answer questions, tell jokes and produce tender responses.

AI will change society. Some jobs will be modified, some new ones will be created. It will be harder for humans to hide truth behind beliefs and comfortable illusions.

And we will be the better for it.


Thursday, February 02, 2023

It's time for Australian government to take artificial intelligence (AI) seriously

Over the last two and a half years I've been deep in a startup using generative artificial intelligence (AI that writes text) to help solve the challenge organisations face in producing and consuming useful content.

This has given me practical insights into the state of the AI industry and how AI technologies can be successfully - or unsuccessfully - implemented within organisations to solve common challenges in the production, repurposing and reuse of content.

So, with a little prompting from the formidable Pia Andrews, I'm taking up blogging again at eGovAU to share some of my experience and insights for government use of AI.

I realise that Australian governments are not new to AI. Many agencies have been using various forms of AI technologies, directly or indirectly, to assist in understanding data or make decisions. 

Some may even include RPA (Robotic Process Automation) and chatbots - which, in my humble opinion, are not true AI, as both are designed programmatically and cannot offer insights or resolve problems outside their programmed parameters and intents.

When I talk about AI, my focus is on systems based on machine-learning, where the AI was built from a body of training, evolving its own understanding of context, patterns and relationships.

These 'thinking' machines are capable of leaps of logic (and illogic) beyond any programmed system, which makes them ideal in situations where there are many edge cases, some of which can't be easily predicted or prepared for. It also places them much closer to being general intelligences, and they often exhibit valuable emergent talents alongside their original reasons for being. 

At the same time, machine-learning is unsuitable for situations where a decision must be completely explainable. As with humans, it is very hard to fully understand how a machine-learning algorithm came to a given conclusion or decision.

As such their utility is not in the realm of automated decision-making; rather it is assistive, encapsulating an evidence base or surfacing details in large datasets that humans might overlook.

Used this way, machine-learning has vast utility for government.

For example,

  • summarizing reports, 
  • converting complex language into plain, 
  • writing draft minutes from an intended purpose and evidence-base, 
  • extracting insights and conclusions from large research/consultation sets, 
  • crafting hundreds of variants to a message for different audiences and mediums,
  • developing structured strategy and communication plans from unstructured notes,
  • writing and updating policies and tender requests, 
  • semantically mapping and summarizing consultation responses,
  • developing programming code, and
  • assisting in all forms of unstructured engagement and information summarization/repurposing.

Machine-learning is thus best used as an assistive and augmentation tool, extending the capabilities of humans by doing the heavy lifting rather than fully automating processes.

It's also critical to recognise that AI of this type isn't the sole purview of IT professionals and data scientists. Working with natural language AIs, as I do, is better supported by a strong business and communications skillset than by programming expertise. 

Designing prompts for an AI (the statements and questions that tell the AI what you want) requires an excellent grasp of language nuances and an extensive vocabulary.
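As a simple illustration (the prompt wording, parameters and model name below are my own examples, sketched against the OpenAI completion API as it stands at the time of writing), most of the craft sits in how the request is framed rather than in the code around it:

    # Illustrative sketch only: the framing, audience, tone and constraints in the
    # prompt do most of the work. Assumes an OpenAI API key is already configured.
    import openai

    framed_prompt = (
        "You are drafting a one-page briefing for a time-poor departmental secretary.\n"
        "Tone: neutral, plain English, no jargon.\n"
        "Task: summarise the policy change below, its rationale, and three risks.\n"
        "Policy notes: <insert notes here>"
    )

    response = openai.Completion.create(
        model="text-davinci-003",       # example model current in early 2023
        prompt=framed_prompt,
        temperature=0.3,                # low temperature for consistent drafting
        max_tokens=400,
    )
    print(response.choices[0].text)

Compare that with simply asking 'Write about the new policy' - the same model, but a far weaker result. The difference is entirely in the language of the prompt.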

Finetuning these AIs requires a strong understanding of the context of information and what constitutes bias, so that an AI is not inadvertently trained to form unwanted patterns and derive irrelevant or unneeded insights.
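For context, here's a sketch of what fine-tuning data can look like (hypothetical examples, in the prompt/completion JSONL format OpenAI's fine-tuning service uses at the time of writing). The code is trivial; the judgement about which examples to include, and what patterns they teach the model, is where bias creeps in:

    # Illustrative only: fine-tuning data is just curated example pairs.
    # Selecting and wording these examples is a business skill, not a coding one.
    import json

    training_examples = [
        {"prompt": "Summarise for a minister: <agency report text>\n\n###\n\n",
         "completion": " <plain-English, neutral summary in the agency's style> END"},
        {"prompt": "Draft a reply to this citizen enquiry: <enquiry text>\n\n###\n\n",
         "completion": " <courteous, accurate reply citing the relevant policy> END"},
    ]

    # One JSON object per line (JSONL), ready to upload for fine-tuning.
    with open("finetune_examples.jsonl", "w") as f:
        for example in training_examples:
            f.write(json.dumps(example) + "\n")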

These are skills that 'business' folks in government agencies often possess to a far greater degree than most IT teams.


So through my eGovAU blog, I'm going to be regularly covering some of the opportunities and challenges I see for governments in Australia seeking to adopt AI (the machine-learning kind), and initiatives I see other governments adopting.

I will also blog occasionally on other eGov (or digital government) topics, however as this is now well-embedded in government, I'll only do so when there's something new I have to add.

