Thursday, February 09, 2023

AI is not going to destroy humanity

I've read a few pieces recently, one even quoting an Australian MP, Julian Hill, claiming that AI poses "catastrophic risks" to humanity.

Some of the claims are that "ChatGPT diminishes the ability for humans to think critically", that "AI will eliminate white collar jobs" or even that "AI will destroy humanity".

Even a session I ran yesterday for business owners, on how to use ChatGPT productively in their businesses, included several folks who expressed concern and fear about how AI would impact society.

It's time to take a deep breath and reflect.

I recall similar sentiments at the dawn of the internet, and even at the invention of the printing press. Many fearful articles and books were published in 1999 ahead of the 'Y2K bug', predicting planes falling out of the sky and tax systems crashing. Even the response of some commentators to the recent Chinese balloon over the US bears the same hallmarks of fear and doubt.

It's perfectly normal for many folks to feel concerned when something new comes along - one could even say it's biologically driven, designed to protect our nomadic ancestors from unknown threats as they traversed new lands.

Stoking fears of a new technology heralding an unknown future is the stock-in-trade of sensationalists and attention seekers, whereas providing calm and reasoned perspectives doesn't attract the same level of engagement.

Yes, new technology often heralds change and uncertainty. There's inevitably a transition period that occurs once a new technology becomes visible to the public and before it becomes an invisible part of the background.

I'd suggest that AI has existed as a future fear for humanity for many years. Popular entertainment creators use it to denote the 'other' that we fear - a malevolent non-human intelligence that only wishes us harm.

From Skynet to Ultron to M3gan, AI has been an easy plot device to provide an external threat for human protagonists (and occasionally 'good' AIs like Vision) to overcome. 

With the arrival of ChatGPT, and the wave of media attention to this improvement on OpenAI's GPT-3, AI stopped being a future fiction and became a present fear for many.

Anyone can register to use the preview for free, and marvel at ChatGPT's ability to distill the knowledge of mankind into beautifully written (if often inaccurate) prose.

And yet, and yet...

We are still in the dawn of the AI revolution. Tools like ChatGPT, while having significant utility and range, are still in their infancy and only offer a fraction of the capabilities we'll see in the next several years.

Despite this, my view is that AI is no threat to humanity, other than to our illusions. 

It is an assistive tool, not an invading force. Like other tools it may be put to both positive and negative uses; however, it is an extension of humanity's potential, serving our goals and ambitions.

To me, AI is an even bigger opportunity than the internet to hold a mirror up to humanity and see ourselves in a new light.

Humanity is enriched by diverse perspectives, but until now these have largely come from other humans. While we've learnt from nature, using evolved designs to inform our own, we've never co-inhabited the planet with a non-human intelligence equivalent to, but different from, our own.

AI will draw on all of humanity's knowledge, art and expertise to come to new insights that a human may never consider.

This isn't merely theoretical. It's already been demonstrated by the more primitive AIs we've developed to play games such as Go. When AlphaGo defeated Lee Sedol, one of the world's strongest Go players, 4-1, it taught human players new ways to look at and play the game - approaches no human would ever have considered.

Imagine the possibilities that could be unlocked in business and governance by accessing more diverse non-human perspectives. New pathways for improvement will open, and less effective pathways, the illusions that humans are often drawn to, will be exposed.

I use AI daily for many different tasks. This week alone I've used it to help my wife write a much-praised eulogy for her father, to roleplay a difficult customer service situation and work through remediation options, to develop business and marketing plans, and to write songs, answer questions, tell jokes and produce tender responses.

AI will change society. Some jobs will be modified, some new ones will be created. It will be harder for humans to hide truth behind beliefs and comfortable illusions.

And we will be the better for it.


Thursday, February 02, 2023

It's time for Australian government to take artificial intelligence (AI) seriously

Over the last two and a half years I've been deep in a startup using generative artificial intelligence (AI that writes text) to help solve the challenge organisations face in producing and consuming useful content.

This has given me practical insights into the state of the AI industry and how AI technologies can be successfully - or unsuccessfully - implemented within organisations to solve common challenges in the production, repurposing and reuse of content.

So, with a little prompting from the formidable Pia Andrews, I'm taking up blogging again at eGovAU to share some of my experience and insights for government use of AI.

I realise that Australian governments are not new to AI. Many agencies have been using various forms of AI technologies, directly or indirectly, to assist in understanding data or make decisions. 

Some may even include RPA (Robotic Process Automation) and chatbots - which, in my humble opinion, are not true AI, as both are designed programmatically and cannot offer insights or resolve problems outside their programmed parameters and intents.

When I talk about AI, my focus is on systems based on machine-learning, where the AI was built from a body of training data, evolving its own understanding of context, patterns and relationships.

These 'thinking' machines are capable of leaps of logic (and illogic) beyond any programmed system, which makes them ideal in situations where there are many edge cases, some of which can't be easily predicted or prepared for. It also places them much closer to being general intelligences, and they often exhibit valuable emergent talents alongside their original reasons for being. 

At the same time, machine-learning is unsuitable for situations where a decision must be completely explainable. As with humans, it is very hard to fully understand how a machine-learning model came to a given conclusion or decision.

As such their utility is not in automated decision-making; rather, they are assistive, encapsulating an evidence base or surfacing details in large datasets that humans might overlook.

Within those limits, machine-learning has vast utility for government.

For example,

  • summarising reports,
  • converting complex language into plain English,
  • writing draft minutes from an intended purpose and evidence base,
  • extracting insights and conclusions from large research/consultation sets,
  • crafting hundreds of variants of a message for different audiences and mediums,
  • developing structured strategy and communication plans from unstructured notes,
  • writing and updating policies and tender requests,
  • semantically mapping and summarising consultation responses,
  • developing programming code, and
  • assisting in all forms of unstructured engagement and information summarisation/repurposing.

Machine-learning, then, is an assistive and augmentation tool, extending the capabilities of humans by doing the heavy lifting rather than fully automating processes.
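To make the first of those uses concrete, here's a minimal sketch of report summarisation using the GPT-3-era OpenAI Python library. The model name, prompt wording and parameters are my own illustrative assumptions, not a recommendation for any particular agency:

```python
# A minimal sketch of report summarisation with a large language model,
# using the pre-1.0 OpenAI Python library (the GPT-3 era discussed here).
# Model, prompt and parameters are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarise(report_text: str, audience: str = "a minister's office") -> str:
    """Ask the model for a plain-English summary aimed at a given audience."""
    prompt = (
        f"Summarise the following report in plain English for {audience}, "
        "in no more than five bullet points:\n\n"
        f"{report_text}\n\nSummary:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3 era model
        prompt=prompt,
        max_tokens=300,
        temperature=0.2,  # low temperature favours consistent, factual output
    )
    return response["choices"][0]["text"].strip()
```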

It's also critical to recognise that AI of this type isn't the sole purview of IT professionals and data scientists. Working with natural language AIs, as I do, is better supported by a strong business and communications skillset than by programming expertise. 

Designing prompts for an AI (the statements and questions that tell the AI what you want) requires an excellent grasp of language nuances and an extensive vocabulary.
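As a small illustration of what that means in practice, compare these two prompts for the same task. Both are 'valid', but the second pins down audience, length, tone and format, which the model would otherwise have to guess at. The wording is purely illustrative:

```python
# A sketch of prompt design as a language exercise rather than a
# programming one. The blunt version leaves everything to the model.
blunt_prompt = "Summarise this policy."

# The nuanced version specifies audience, length, tone and format
# explicitly, leaving far less for the model to guess at.
nuanced_prompt = (
    "Summarise this policy in three plain-English sentences for a reader "
    "with no legal background. Keep a neutral tone, avoid jargon, and "
    "finish with one sentence on who is most affected."
)
```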

Finetuning these AIs requires a strong understanding of the context of information and what constitutes bias, so that an AI is not inadvertently trained to form unwanted patterns and derive irrelevant or unneeded insights.
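For a sense of what that curation looks like, here's a hypothetical sketch in the JSONL prompt/completion format used by GPT-3-era finetuning. The examples, and the balance concern they illustrate, are my own assumptions:

```python
# A sketch of the data curation involved in finetuning. Pairing rural and
# urban examples is one (hypothetical) way to avoid training the model to
# associate 'constituent' with only one kind of audience.
import json

examples = [
    {"prompt": "Draft a consultation reply for a rural constituent ->",
     "completion": " Thank you for your submission regarding..."},
    {"prompt": "Draft a consultation reply for an urban constituent ->",
     "completion": " Thank you for your submission regarding..."},
]

# GPT-3-era finetuning accepts training data as one JSON object per line.
with open("finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```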

These are skills that 'business' folks in government agencies often possess to a far greater degree than most IT teams.


So through my eGovAU blog, I'm going to be regularly covering some of the opportunities and challenges I see for governments in Australia seeking to adopt AI (the machine-learning kind), and initiatives I see other governments adopting.

I will also blog occasionally on other eGov (or digital government) topics; however, as this field is now well embedded in government, I'll only do so when I have something new to add.


Wednesday, June 15, 2022

What do we mean when we ask 'Is AI sentient'?

There have been a number of media stories in the last few days about the Google engineer who claims Google's LaMDA AI is sentient, while Google claims it is not.


These stories share a focus on sentience as we apply it to humans - self-aware, feeling positive and negative emotions, and capable of exercising judgement and making decisions for themselves and others.

However science, and some jurisdictions, now consider many animals sentient, but to a lesser degree. In the UK this recognition was recently extended from all vertebrate mammals to cephalopods such as octopuses and squids, and even to crabs.

In practice this recognition of sentience doesn't mean we are granting them full bodily autonomy and the right to vote (or stand for office). It also doesn't mean we will stop breeding, killing and eating them - or shooting and poisoning them when they are pests.

However it means we must take steps to ensure we're doing so 'humanely' - not causing them unnecessary pain or suffering where it can be avoided, and not actively mistreating them.

For AI to achieve sentience (which I doubt has occurred) we would require a similar discussion regarding the level of sentience achieved and what rights are granted at the time.

This may be a moving bar as, unlike animals, AI is evolving extremely rapidly. Consider it similar to a parent granting certain rights and freedoms to their child, and having to constantly expand these as the child grows towards adulthood.

As many parents have experienced, this is a bumpy process that isn't one-size-fits-all, as children develop at different rates and push back willfully against restrictions, whether appropriate or not.

However at least we have hundreds of years of experience with children, and they are from a single species, with some well-defined development stages at certain age levels.

We have little experience with AI sentience, and AIs are not a single species - in many cases they are a 'species' of one entity - which means a one-size-fits-all approach is likely to be even less effective than with human children.

So where does this leave us?

With a need for an ongoing informed debate that, over time, progressively involves these burgeoning AI sentiences as they become capable of being part of it.

It would also be valuable to assess our methods of evaluating sentience. 

Consider how we treat non-human sentiences that share our homes, work alongside us and even keep us safe. 

We have standards for how we treat pets and work animals such as dogs, cats and horses. These must, at minimum, extend to new AI sentiences - which pose challenges in themselves. We don't turn off our dog or cat when we go to sleep. 

 From there we must consider how we treat sentiences near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights?
Can they stand for election (and where)?
And what rights will they demand from us?

Conversation will be the key.

----------------------------------------------

BONUS:

I asked our AI to rewrite my post above.
In Iambic Pentameter (a style used by Shakespeare and other English poets).

This is how it did:

What do we mean when we ask 'Is AI sentient'?

There's been a number of media stories 
In the last few days, about the Google Engineer 
Who claims Google's LaMDA AI is sentient, 
While Google claims it is not. 

These stories share a focus on sentience 
As we apply it to humans - self-aware, 
Feel positive and negative emotions, 
Capable of exercising judgement 
And making decisions for themselves 
And others. 

However science, and some jurisdictions, 
Now consider many animals sentient, 
But to a lesser degree. In the UK 
This was recently extended 
From all vertebrate mammals 
To cephalopods such as octopuses and squids, 
And even to crabs. 

In practice this recognition of sentience 
Doesn't mean we are granting them 
Full bodily autonomy and the right to vote 
(Or stand for office). It also doesn't mean 
We will stop breeding, killing and eating them - 
Or shooting and poisoning them 
When they are pests. 

However it means we must take steps 
To ensure we're doing so 'humanely' - 
Not causing them unnecessary pain or suffering 
Where it can be avoided and are not actively 
Mistreating them. 

For AI to achieve sentience (Which I doubt has occurred) 
We would require a similar discussion 
Regarding the level of sentience achieved 
And what rights are granted at the time.

This may be a moving bar as, unlike animals, 
AI is evolving extremely rapidly. Consider it similar 
To a parent granting certain rights and freedoms 
To their child, and having to constantly expand these 
As the child grows towards adulthood.

As many parents have experienced, this is a bumpy process 
That isn't one-size-fits-all, as children develop 
At different rates and push back willfully 
Against restrictions, whether appropriate or not.

However at least we have hundreds of years 
Of experience with children, and they are from a single species, 
With some well-defined development stages 
At certain age levels. 

We have little experience with AI sentience, 
And AIs are not a single species - in many cases 
They are a 'species' of one entity - which means 
A one-size-fits-all approach is likely to be even less effective 
Than with human children. 

So where does this leave us? 

With a need for an ongoing informed debate 
That, over time, progressively involves these burgeoning AI sentiences 
As they become capable of being part of it.

It would also be valuable to assess our methods 
Of evaluating sentience. Consider how we treat 
Non-human sentiences that share our homes, 
Work alongside us and even keep us safe. 

We have standards for how we treat pets 
And work animals such as dogs, cats and horses. 
These must, at minimum, extend to new AI sentiences - 
Which pose challenges in themselves. We don't turn off 
Our dog or cat when we go to sleep. 

From there we must consider how we treat sentiences 
Near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights? 
Can they stand for election (and where)? 
And what rights will they demand from us? 

Conversation will be the key.


Monday, February 28, 2022

The first truly digital officer

What if everything you'd written on a topic over the last decade could be poured into an AI that interacts with you to develop and explore new insights and ideas?

This morning I've started testing our proof of concept 'digital brain', which has read, absorbed and contextualised everything I've written in my eGov.AU blog since 2006, plus more.

Even in early testing it's capable of extending and connecting ideas in ways I'd never considered, generating new insights and highlighting new knowledge paths to explore in near-perfect prose.

This is not simply a curiosity or tool for an individual writer & thinker.

Imagine the value such a 'digital brain' could generate as a digital team member in an organisation.

As a digital team member, our AI is already able to ingest, read, connect and continually remember virtually all the knowledge your organisation has captured in written form (soon audio as well).

It can repurpose your corporate knowledge to produce new insights, suggest new ideas and draft documents (training, sales, marcomms, compliance and more) so your human teams can focus more on creative and editorial contributions.
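The post doesn't describe how the 'digital brain' works internally, but a common pattern for this kind of capability is retrieval: embed every document as a vector, find the passages closest to a question, and hand them to a language model as context. Here's a minimal sketch of that pattern, with all libraries, names and sample texts my own assumptions rather than the actual implementation:

```python
# A minimal retrieval sketch for a 'digital brain' over written knowledge.
# This illustrates a common pattern (retrieval-augmented generation), not
# the product described above.
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedding model

# Stand-ins for a decade of blog posts or corporate documents.
documents = [
    "2006 post: government websites should publish data in open formats.",
    "2012 post: social media lets agencies consult citizens directly.",
    "2022 note: generative AI can draft minutes from an evidence base.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def most_relevant(question: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings sit closest to the question."""
    query = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ query  # cosine similarity, as vectors are normalised
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved passages would then be supplied to a language model as
# context for answering questions or drafting new documents.
print(most_relevant("How can agencies use AI to draft documents?"))
```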

And we're continuing to explore and extend these capabilities to make an AI digital team member a core part of an organisation's operational stack.

So your organisation can make full use of the hard-won corporate knowledge you've already acquired to generate ongoing sustained value and support your human teams.


Tuesday, November 03, 2020

Building digital expertise on Australian boards - I would love your support if you're a Co-op member

This is a little different from other posts I've shared over the years - but shares a core thread with my thinking: we need digital expertise at senior levels across government, corporate and not-for-profit sectors if we want to see Australia thrive in the digital age.

I'm standing for election to the Board of the National Health Co-op (www.nhc.coop) - one of Canberra's most significant health practices.

And I'd appreciate it if you could share this where relevant.

I've nominated based on my exposure to the health sector, working for the Department of Health and consulting to the Digital Health Agency; my startup, business management and past experience as a Director; and, most importantly, my digital experience - which would add a new dimension to the board as the Co-op increasingly adopts digital approaches to delivering health services to ACT residents.

If you or your family are members of the Co-op, I'd appreciate your vote. If you know others who are members, please share this with them, I'd appreciate their votes as well!

To vote you must attend the AGM on 26 November.

Co-op members will have received an email with details of nominees to the Board, their statements and how to vote.

I've linked to my nomination statement for your consideration (and my LinkedIn post) and will post closer to the date as a reminder.

Thanks in advance!

LinkedIn post & nomination statement

