Wednesday, February 15, 2023

DTA chooses a cautious path for generative AI

The DTA's CEO, Chris Fechner, has advised public servants to be cautious in their use of ChatGPT and other generative AI, as reported by InnovationAus.

This is an unsurprising but positive response. While it urges public servants to use caution, it doesn't shut down experimentation and prototyping.

Generative AI only recently became commercially useful, and most commercial generative AIs are currently based overseas (though my company has rolled out local prototypes). This creates significant confidentiality and security challenges for government use, alongside the challenges of accuracy, factualness and quality assurance.

Given my public sector background, I began experimenting with these AIs for government use in October 2020. Within a few weeks I was pleasantly surprised at how well an AI such as GPT-3 could produce minutes and briefing papers from associated information, accurately adopting the tone and approach I had used and encountered during my years in the APS.
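For readers curious what that kind of experiment looks like in practice, here's a minimal sketch using the OpenAI completions API of that era. The background material, model name and parameters are illustrative assumptions, not a record of my actual prompts:

    import openai  # pre-1.0 OpenAI SDK, current in the GPT-3 era

    openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

    # Illustrative background material a policy officer might supply
    background = (
        "Topic: Extension of a digital identity pilot.\n"
        "Key facts: 12,000 users enrolled; 94% satisfaction; "
        "funding lapses 30 June."
    )

    prompt = (
        "Using the background below, draft a one-page ministerial briefing "
        "minute in formal Australian Public Service style, with Purpose, "
        "Background, Issues and Recommendation headings.\n\n" + background
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-family completion model
        prompt=prompt,
        max_tokens=600,
        temperature=0.3,  # low temperature favours sober, consistent prose
    )

    print(response["choices"][0]["text"])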

Subsequently I've used generative AI to develop simulated laws and policy documents, and to provide insightful advice based on regulations and laws.

This is just the tip of the iceberg for generative AI in government. 

I see potential to accelerate the production of significant amounts of internal correspondence, reports, strategies, intranet content and various project, product and user documentation using the assistance of AI.

There's also enormous potential to streamline the production and repurposing of externally focused content: turning reports into media releases, summaries and social posts; supporting engagement processes through the analysis of responses; developing and repurposing communications materials; and much more.

However, it's important to do this within the context of the public service - which means ensuring that the generative AIs used are appropriately trained and finetuned to the needs of an agency.
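The post doesn't prescribe a finetuning mechanism, but at the time the common approach was supervised finetuning on prompt/completion pairs supplied as a JSONL file. A minimal sketch, with entirely hypothetical agency examples:

    import json

    # Hypothetical pairs mapping agency-style requests to approved answers
    examples = [
        {"prompt": "Summarise for the Minister: quarterly service report ->",
         "completion": " Purpose: To advise on quarterly service performance..."},
        {"prompt": "Draft a plain-language web notice: planned outage ->",
         "completion": " Our online services will be unavailable from 10pm Saturday..."},
    ]

    # OpenAI's legacy finetuning jobs accepted training data in this format
    with open("agency_style.jsonl", "w") as f:
        for row in examples:
            f.write(json.dumps(row) + "\n")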

Also critical is recognising that generative AI, like digital before it, should not be controlled by IT teams. It is a business solution that requires skills few IT teams possess. For example, finetuning and prompt engineering both require strong language capabilities and business knowledge to ensure an AI delivers the outcomes required.

Unlike traditional computing, where applications can be programmed to select from a controlled set of options, or a block list used to exclude dangerous or inappropriate options, generative AIs must be trained and guided towards appropriate behaviour - more akin to parenting than programming.
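To make the contrast concrete, here's a small illustrative sketch (the topics and guidance text are my own inventions): the first function's behaviour can be exhaustively enumerated in code, while the generative system's behaviour can only be shaped, not enumerated:

    # Traditional computing: every acceptable option is enumerated in code.
    ALLOWED_TOPICS = {"payments", "enrolment", "appointments"}

    def route_request(topic: str) -> str:
        if topic not in ALLOWED_TOPICS:  # anything unlisted is rejected outright
            return "Sorry, I can't help with that."
        return f"Routing you to the {topic} team."

    # Generative AI: there is no closed list of outputs to check against.
    # Behaviour is shaped by training, finetuning and standing guidance,
    # e.g. an instruction prepended to every request (illustrative only):
    GUIDANCE = (
        "You are a government service assistant. Answer only questions about "
        "payments, enrolment or appointments. If asked anything else, "
        "politely decline and suggest contacting the agency directly."
    )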

I'm certain that the folks experimenting with generative AI in government are more likely on the business end than the IT end - as we saw with digital services several decades ago.

And I hope the public sector remembers the lessons from that period, so that the battles between business and IT are resolved faster and more smoothly than they were with digital.


Thursday, February 09, 2023

AI is not going to destroy humanity

I've read a few pieces recently, one even quoting an Australian MP, Julian Hill, making claims of "catastrophic risks" to humanity from AI.

Some of the claims are that "ChatGPT diminishes the ability for humans to think critically", that "AI will eliminate white collar jobs" or even that "AI will destroy humanity".

Even a session I ran yesterday for business owners, on how to productively use ChatGPT in their businesses, included several folks who expressed concern and fear about how AI would impact society.

It's time to take a deep breath and reflect.

I recall similar sentiments at the dawn of the internet; the same were recorded at the invention of the printing press. Many fearful articles and books published in 1999 ahead of the 'Y2K bug' predicted planes would fall out of the sky and tax systems would crash. Even the response of some commentators to the recent Chinese balloon over the US bears the same hallmarks of fear and doubt.

It's perfectly normal for many folks to feel concerned when something new comes along - one could even say it's biologically driven, designed to protect our nomadic ancestors from unknown threats as they traversed new lands.

Stoking fears of a new technology heralding an unknown future is the stock-in-trade of sensationalists and attention seekers, whereas providing calm and reasoned perspectives doesn't attract the same level of engagement.

Yes, new technology often heralds change and uncertainty. There's inevitably a transition period that occurs once a new technology becomes visible to the public and before it becomes an invisible part of the background.

I'd suggest that AI has long been a future fear for humanity. Popular entertainment creators use it to denote the 'other' that we fear - a malevolent non-human intelligence that only wishes us harm.

From Skynet to Ultron to M3gan, AI has been an easy plot device to provide an external threat for human protagonists (and occasionally 'good' AIs like Vision) to overcome. 

With the arrival of ChatGPT, and the wave of media attention to this improvement on OpenAI's GPT-3, AI stopped being a future fiction and became a present fear for many.

Anyone can register to use the preview for free, and marvel at ChatGPT's ability to distill the knowledge of mankind into beautifully written (if often inaccurate) prose.

And yet, and yet...

We are still in the dawn of the AI revolution. Tools like ChatGPT, while having significant utility and range, are still in their infancy and only offer a fraction of the capabilities we'll see in the next several years.

Despite this, my view is that AI is no threat to humanity, other than to our illusions. 

It is an assistive tool, not an invading force. Like other tools it may be put to both positive and negative uses; however, it is an extension of humanity's potential, serving our goals and ambitions.

To me AI is a bigger opportunity than even the internet to hold a mirror up to humanity and see ourselves in a new light.

Humanity is enriched by diverse perspectives, but until now these have largely come from other humans. While we've learnt from nature, using evolved designs to inform our own, we've never co-inhabited the planet with a non-human intelligence equivalent to, but different from, our own.

AI will draw on all of humanity's knowledge, art and expertise to come to new insights that a human may never consider.

This isn't merely theoretical. It's already been demonstrated by the more primitive AIs we've developed to play games such as Go. When AlphaGo defeated Lee Sedol, one of the world's strongest Go players, 4-1, it taught human players new ways to look at and play the game - approaches no human would have considered.

Imagine the possibilities that could be unlocked in business and governance by accessing more diverse non-human perspectives. New pathways for improvement will open, and less effective pathways, the illusions that humans are often drawn to, will be exposed.

I use AI daily for many different tasks. In this week alone I've used it to help my wife write a much-praised eulogy for her father, to roleplay a difficult customer service situation and work through remediation options, to develop business and marketing plans, and to write songs, answer questions, tell jokes and produce tender responses.

AI will change society. Some jobs will be modified, some new ones will be created. It will be harder for humans to hide truth behind beliefs and comfortable illusions.

And we will be the better for it.


Thursday, February 02, 2023

It's time for Australian government to take artificial intelligence (AI) seriously

Over the last two and a half years I've been deep in a startup using generative artificial intelligence (AI that writes text) to help solve the challenge organisations face in producing and consuming useful content.

This has given me practical insights into the state of the AI industry and how AI technologies can be successfully - or unsuccessfully - implemented within organisations to solve common challenges in the production, repurposing and reuse of content.

So, with a little prompting from the formidable Pia Andrews, I'm taking up blogging again at eGovAU to share some of my experience and insights for government use of AI.

I realise that Australian governments are not new to AI. Many agencies have been using various forms of AI technologies, directly or indirectly, to assist in understanding data or make decisions. 

Some may even include RPA (Robotic Process Automation) and chatbots - which in my humble opinion are not true AI, as both are designed programmatically and cannot offer insights or resolve problems outside their programmed parameters and intents.

When I talk about AI, my focus is on systems based on machine-learning, where the AI is trained on a body of data, evolving its own understanding of context, patterns and relationships.

These 'thinking' machines are capable of leaps of logic (and illogic) beyond any programmed system, which makes them ideal in situations where there are many edge cases, some of which can't be easily predicted or prepared for. It also places them much closer to being general intelligences, and they often exhibit valuable emergent talents alongside their original reasons for being. 

At the same time, machine-learning is unsuitable for situations where a decision must be completely explainable. As with humans, it is very hard to fully understand how a machine-learning model came to a given conclusion or decision.

Their utility is therefore not in automated decision-making; rather, it is assistive - encapsulating an evidence base or surfacing details in large datasets that humans might overlook.

Used this way, machine-learning has vast utility for government.

For example, it can assist with:

  • summarizing reports, 
  • converting complex language into plain, 
  • writing draft minutes from an intended purpose and evidence-base, 
  • extracting insights and conclusions from large research/consultation sets, 
  • crafting hundreds of variants to a message for different audiences and mediums,
  • developing structured strategy and communication plans from unstructured notes,
  • writing and updating policies and tender requests, 
  • semantically mapping and summarizing consultation responses (sketched in code below),
  • developing programming code, and
  • assisting in all forms of unstructured engagement and information summarization/repurposing.
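As a concrete illustration of the consultation-mapping item above, here's a minimal sketch of one common approach: embed each response, then cluster the embeddings so similar submissions group together for human review. The libraries, model and sample responses are my assumptions, not a reference to any particular agency system:

    from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
    from sklearn.cluster import KMeans

    responses = [
        "The proposed opening hours are too short for shift workers.",
        "Please extend opening hours - I finish work at 7pm.",
        "The online form was confusing and kept timing out.",
        "I couldn't complete the web form on my phone.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
    embeddings = model.encode(responses)

    # Two themes are expected in this toy sample; real consultations need
    # the cluster count tuned or chosen automatically
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(embeddings)
    for label, text in sorted(zip(labels, responses)):
        print(label, text)  # responses on the same theme share a label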

Machine-learning is thus an assistive and augmentation tool, extending the capabilities of humans by doing the heavy lifting rather than fully automating processes.

It's also critical to recognise that AI of this type isn't the sole purview of IT professionals and data scientists. Working with natural language AIs, as I do, is better supported by a strong business and communications skillset than by programming expertise. 

Designing prompts for an AI (the statements and questions that tell the AI what you want) requires an excellent grasp of language nuances and an extensive vocabulary.
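A small illustration of why that grasp of nuance matters: two prompts requesting the 'same' document, where the second's constraints do most of the work. Both are invented examples:

    # A vague prompt leaves register, length and scope to chance
    loose_prompt = "Write something about the new grants program."

    # A crafted prompt pins down audience, voice, structure and boundaries
    crafted_prompt = (
        "Write a 150-word plain-English summary of the new grants program "
        "for a public-facing fact sheet. Use active voice, avoid jargon, "
        "state who is eligible, and do not speculate beyond the source "
        "material below.\n\n"
        "Source material: [paste approved program details here]"
    )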

Finetuning these AIs requires a strong understanding of the context of information and what constitutes bias, so that an AI is not inadvertently trained to form unwanted patterns and derive irrelevant or unneeded insights.

These are skills that 'business' folks in government agencies often possess to a far greater degree than most IT teams.


So through my eGovAU blog, I'm going to be regularly covering some of the opportunities and challenges I see for governments in Australia seeking to adopt AI (the machine-learning kind), and initiatives I see other governments adopting.

I will also blog occasionally on other eGov (or digital government) topics, however as this is now well-embedded in government, I'll only do so when there's something new I have to add.


Wednesday, June 15, 2022

What do we mean when we ask 'Is AI sentient'?

There have been a number of media stories in the last few days about the Google engineer who claims Google's LaMDA AI is sentient, while Google claims it is not.


These stories share a focus on sentience as we apply it to humans - self-aware, able to feel positive and negative emotions, and capable of exercising judgement and making decisions for themselves and others.

However science, and some jurisdictions, now consider many animals sentient, but to a lesser degree. In the UK this recognition was recently extended from all vertebrate animals to cephalopods such as octopuses and squids, and even to crabs.

In practice this recognition of sentience doesn't mean we are granting them full bodily autonomy and the right to vote (or stand for office). It also doesn't mean we will stop breeding, killing and eating them - or shooting and poisoning them when they are pests.

However it means we must take steps to ensure we're doing so 'humanely' - not causing them unnecessary pain or suffering where it can be avoided, and not actively mistreating them.

For AI to achieve sentience (which I doubt has occurred) we would require a similar discussion regarding the level of sentience achieved and what rights are granted at the time.

This may be a moving bar as, unlike animals, AI is evolving extremely rapidly. Consider it similar to a parent granting certain rights and freedoms to their child, and having to constantly expand these as the child grows towards adulthood.

As many parents have experienced, this is a bumpy process that isn't one-size-fits-all, as children develop at different rates and push back willfully against restrictions, whether appropriate or not.

However at least we have hundreds of years of experience with children, and they are from a single species, with some well-defined development stages at certain age levels.

We have little experience with AI sentience, and AIs are not a single species - in many cases they are a 'species' of one entity - which means a one-size-fits-all approach is likely to be even less effective than with human children.

So where does this leave us?

With a need for an ongoing informed debate that, over time, progressively involves these burgeoning AI sentiences as they become capable of being part of it.

It would also be valuable to assess our methods of evaluating sentience. 

Consider how we treat non-human sentiences that share our homes, work alongside us and even keep us safe. 

We have standards for how we treat pets and work animals such as dogs, cats and horses. These must, at minimum, extend to new AI sentiences - which pose challenges in themselves. We don't turn off our dog or cat when we go to sleep. 

 From there we must consider how we treat sentiences near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights?
Can they stand for election (and where)?
And what rights will they demand from us?

Conversation will be the key.

----------------------------------------------

BONUS:

I asked our AI to rewrite my post above.
In Iambic Pentameter (a style used by Shakespeare and other English poets).

This is how it did:

What do we mean when we ask 'Is AI sentient'?

There's been a number of media stories 
In the last few days, about the Google Engineer 
Who claims Google's LaMDA AI is sentient,
While Google claims it is not. 

These stories share a focus on sentience 
As we apply it to humans - self-aware, 
Feel positive and negative emotions, 
Capable of exercising judgement 
And making decisions for themselves 
And others. 

However science, and some jurisdictions, 
Now consider many animals sentient, 
But to a lesser degree. In the UK
This was recently extended 
From all vertebrate mammals 
To cephalopods such as octopuses and squids, 
And even to crabs. 

In practice this recognition of sentience 
Doesn't mean we are granting them 
Full bodily autonomy and the right to vote 
(Or stand for office). It also doesn't mean 
We will stop breeding, killing and eating them - 
Or shooting and poisoning them 
When they are pests. 

However it means we must take steps 
To ensure we're doing so 'humanely' - 
Not causing them unnecessary pain or suffering 
Where it can be avoided and are not actively 
Mistreating them. 

For AI to achieve sentience (Which I doubt has occurred) 
We would require a similar discussion 
Regarding the level of sentience achieved 
And what rights are granted at the time.

This may be a moving bar as, unlike animals, 
AI is evolving extremely rapidly. Consider it similar 
To a parent granting certain rights and freedoms 
To their child, and having to constantly expand these 
As the child grows towards adulthood.

As many parents have experienced, this is a bumpy process 
That isn't one-size-fits-all, as children develop 
At different rates and push back willfully 
Against restrictions, whether appropriate or not.

However at least we have hundreds of years 
Of experience with children, and they are from a single species, 
With some well-defined development stages 
At certain age levels. 

We have little experience with AI sentience, 
And AIs are not a single species - in many cases 
They are a 'species' of one entity - which means 
A one-size-fits-all approach is likely to be even less effective 
Than with human children. 

So where does this leave us? 

With a need for an ongoing informed debate 
That, over time, progressively involves these burgeoning AI sentiences 
As they become capable of being part of it.

It would also be valuable to assess our methods 
Of evaluating sentience. Consider how we treat 
Non-human sentiences that share our homes, 
Work alongside us and even keep us safe. 

We have standards for how we treat pets 
And work animals such as dogs, cats and horses. 
These must, at minimum, extend to new AI sentiences - 
Which pose challenges in themselves. We don't turn off 
Our dog or cat when we go to sleep. 

From there we must consider how we treat sentiences 
Near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights? 
Can they stand for election (and where)? 
And what rights will they demand from us? 

Conversation will be the key.


Monday, February 28, 2022

The first truly digital officer

What if everything you'd written on a topic over the last decade could be poured into an AI that interacts with you to develop and explore new insights and ideas?

This morning I've started testing our proof-of-concept 'digital brain', which has read, absorbed and contextualised everything I've written in my eGovAU blog since 2006, plus more.

Even in early testing it's capable of extending and connecting ideas in ways I'd never considered, generating new insights and highlighting new knowledge paths to explore in near-perfect prose.

This is not simply a curiosity or tool for an individual writer & thinker.

Imagine the value such a 'digital brain' could generate as a digital team member in an organisation.

As a digital team member, our AI is already able to ingest, read, connect and continually remember virtually all the knowledge your organisation has captured in written form (soon audio as well).
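The post doesn't describe the underlying architecture, but one plausible shape for such a capability is embedding-based retrieval: embed every passage of the corpus once, then answer questions by recalling the closest passages and drafting from them. A minimal sketch, with library, model and corpus choices that are purely my assumptions, not our product's actual stack:

    import numpy as np
    from sentence_transformers import SentenceTransformer

    corpus = [
        "Gov 2.0 succeeds when agencies engage citizens where they already are.",
        "Internal blogs spread knowledge that email silos trap.",
        # in practice, every paragraph written since 2006 would be loaded here
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    corpus_vectors = model.encode(corpus, normalize_embeddings=True)

    def recall(question: str, top_k: int = 2) -> list:
        """Return the stored passages most relevant to the question."""
        q = model.encode([question], normalize_embeddings=True)[0]
        scores = corpus_vectors @ q  # cosine similarity via normalised dot product
        best = np.argsort(scores)[::-1][:top_k]
        return [corpus[i] for i in best]

    print(recall("How should agencies share internal knowledge?"))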

Our AI can repurpose your corporate knowledge to produce new insights, suggest new ideas and draft documents (training, sales, marcomms, compliance and more) so your human teams can focus more on creative and editorial contributions.

And we're continuing to explore and extend these capabilities to make an AI digital team member a core part of an organisation's operational stack.

So your organisation can make full use of the hard-won corporate knowledge you've already acquired to generate ongoing sustained value and support your human teams.

