Friday, March 31, 2023

An example of GPT4 at work

Given that many folks may be observing but not actually using AIs such as GPT3 and GPT4 as yet, I wanted to give a quick example of how these AIs perform in practice.

Here's an example where I asked an AI to write a treatise on the Westworld notion that a human consciousness only requires 10,247 lines of code.

My prompt:

Hi ChatGPT. In the TV series Westworld, it is suggested that a human consciousness can be defined in 10,247 lines of code. Please write a treatise on why this would be the case and a process that a researcher or psychologist might use to understand and produce this code.

GPT4 response:

Title: The Quest for Human Consciousness: Decoding the 10,247 Lines of Code

Abstract:

The television series Westworld presents the idea that human consciousness can be distilled down to 10,247 lines of code. While this is a fictional concept, it raises intriguing questions about the nature of human consciousness and how it might be replicated or understood through computational means. In this treatise, we explore the theoretical underpinnings of this concept, discuss the possible reasons for this specific number of code lines, and outline a potential process for a researcher or psychologist to decipher and generate this code.

Introduction

The concept of condensing human consciousness into 10,247 lines of code is a fascinating idea that requires an understanding of the human mind's complexity and the computational approaches used to mimic it. This treatise aims to explore this notion, touching upon the interdisciplinary fields of cognitive science, psychology, neuroscience, and artificial intelligence.

Theoretical Underpinnings

The idea that human consciousness can be expressed in a specific number of code lines is built on several assumptions:

a. Reductionism: This perspective argues that complex phenomena, such as consciousness, can be reduced to simpler, more fundamental components. In this case, it implies that human consciousness can be explained and replicated by breaking it down into discrete elements represented by lines of code.

b. Computational Theory of Mind: This theory posits that the human mind operates like an information processor, with mental processes akin to computational algorithms. If human consciousness adheres to this model, it might be possible to represent it as a set of algorithms or code lines.

c. Artificial Intelligence: The rapid advancements in AI research and neural networks support the idea that replicating human-like thinking, learning, and problem-solving abilities is possible through computational means.

Why 10,247 Lines of Code?

The specific number, 10,247, is likely an arbitrary choice made by the creators of Westworld to illustrate the complexity and uniqueness of human consciousness. Nonetheless, the number serves as a captivating starting point for discussions about the feasibility of representing consciousness in code.

A Process for Decoding and Producing the Consciousness Code

To understand and generate the 10,247 lines of code representing human consciousness, a researcher or psychologist might follow this process:

a. Mapping Mental Processes: Start by identifying the key cognitive processes and mental states that contribute to human consciousness. This could involve categorizing various cognitive functions, emotions, and subjective experiences.

b. Investigating Neural Correlates: Examine the brain regions and neural networks responsible for the identified cognitive processes, using techniques such as fMRI, EEG, and MEG.

c. Developing Computational Models: Develop algorithms and computational models that replicate the observed cognitive processes and neural activities. This could involve utilizing artificial neural networks, machine learning, and other AI techniques.

d. Code Generation: Translate the developed computational models into code, keeping track of the number of lines used. Iterate and optimize the code to reduce its length without sacrificing its accuracy in replicating human consciousness.

e. Verification and Validation: Test the generated code by comparing its outputs with empirical data collected from human subjects, adjusting and refining the code as needed.

Conclusion

While the notion of human consciousness being distilled down to 10,247 lines of code is a fictional idea, it sparks fascinating discussions about the nature of consciousness and the potential for replicating it through computational means. By exploring interdisciplinary research in cognitive science, psychology, neuroscience, and artificial intelligence, we can attempt to unravel the mysteries of the human mind and push the boundaries of what is possible in understanding and emulating consciousness.


My follow-on prompt:

So as a neuroscientist, mathematician and psychologist, using the treatise above, write the actual code for a human consciousness.

GPT4 response:

As an AI language model, I cannot provide you with the actual code for human consciousness, since it is a highly complex and currently unsolved problem in neuroscience, psychology, and artificial intelligence. The idea of condensing human consciousness into a specific number of lines of code, as presented in the TV series Westworld, is purely fictional.

Moreover, creating an actual code for human consciousness would require a deep understanding of the human brain, which we have not yet achieved. It would also necessitate advancements in artificial intelligence, neural networks, and cognitive modeling that are currently beyond our reach.

However, I can outline some of the components that would potentially be part of the code, based on the current understanding of human cognition and artificial intelligence research:

Sensory input processing: Code to process and interpret inputs from various sensory modalities such as vision, audition, touch, taste, and smell.

Memory systems: Code to implement short-term and long-term memory systems, enabling the storage and retrieval of information.

Attention mechanisms: Code to allocate cognitive resources to relevant stimuli and filter out irrelevant information.

Emotion processing: Code to simulate emotional states and their influence on cognition, decision-making, and behavior.

Language processing: Code to comprehend and generate natural language, enabling communication.

Decision-making and problem-solving: Code to evaluate options and make choices based on goals, constraints, and available information.

Learning mechanisms: Code to adapt and improve cognitive processes through experience, including reinforcement learning, supervised learning, and unsupervised learning.

Self-awareness and introspection: Code to model self-awareness and the ability to reflect on one's own mental states.

Social cognition: Code to understand and interpret the mental states, intentions, and emotions of others, enabling social interaction.

Conscious experience: Code to simulate subjective experiences and the phenomenological aspects of consciousness.

It is important to note that these components are only a rough outline of what might be involved in replicating human consciousness, and the actual implementation would be far more complex and intricate. Developing a comprehensive code for human consciousness remains a formidable challenge that will likely require many more years of research and collaboration across multiple scientific disciplines.

Read full post...

Thursday, March 30, 2023

Is Artificial Intelligence safe? It depends who you ask...

Over the last few days there have been significant calls for a slowdown or pause in artificial intelligence research and development, or at least for a pause on public implementations of AI.

There's some significant rationale behind this - with the arrival of GPT4 (which I've been extensively experimenting with) we have seen another huge leap in AI capability.

We've also seen many large companies announce they are working to integrate this level of AI into their services in ways that allow the public to use (or misuse) this capability.

Some of this is extremely valuable - such as integrating a writing, formula and presentation assistant into Microsoft's suite of tools. Some appears risky - such as Snapchat's release of an AI 'friend' into its paid service in February which, as the video The AI Dilemma (linked) demonstrates, can incidentally be used to help sexual predators groom children (watch from 47 min to 49 min for this specific segment).

Also we've seen over a thousand AI luminaries and researchers call for a pause on AIs more sophisticated than GPT4 (letter here, article about it here) - a letter which has received particular attention because Elon Musk signed it, but is actually notable for the calibre and breadth of the other industry experts and AI company CEOs who signed.

Whether or not government is extensively using AI, AI is now having significant impacts on society. These will only increase - and extremely rapidly.

Examples like using the Snapchat AI for grooming are the tip of the iceberg. It is now possible - with just three seconds of audio (using a Microsoft research system) - to create a filter mimicking any voice, rendering voice-recognition security systems effectively useless.

In fact there have already been several cases where criminals call individuals to capture their voicemail message (in their voice) or their initial greeting, then use that recording to voice-authenticate the individual's accounts and steal funds.

This specific example isn't new - the first high-profile case occurred in 2019.

However the threshold for accessing and using this type of technology has dramatically come down, making it accessible to almost anyone.

And this is only one scenario. Deep fakes can also mimic appearances, including in video, and AIs can also be used to simulate official documents or conversations with organisations to phish people.

That's alongside CV fakery, using AI to cheat on tests (in schools, universities and the workplace) and secretly outsourcing your job to an AI, which may expose commercially sensitive information to external entities.

And we haven't even gotten to the risks of AI that, in pursuit of its goal or reward, uses means such as replicating itself, breaking laws or coercing humans to support it.

For governments this is an accelerating potential disaster. It needs the full attention of key teams to ensure they are designing systems that cannot be exploited by, for example, someone asking an AI to read a system's code and identify potential vulnerabilities.

Equally the need to inform and protect citizens is becoming critical - as the Snapchat example demonstrates.

With all this, I remain an AI optimist. AI offers enormous benefits for humanity when used effectively. However, with the proliferation of AI - to the extent that it is now possible to run a GPT3-level AI on a laptop (using the Alpaca research model) - government needs to be proactive in its approach to artificial intelligence.


Read full post...

Monday, March 06, 2023

Artificial Intelligence isn't the silver bullet for bias. We have to keep working on ourselves.

There's been a lot of attention paid to AI ethics over the last few years due to concerns that use of artificial intelligence may further entrench and amplify the impact of subconscious and conscious biases.

This is very warranted. Much of the data humans have collected over the last few hundred years is heavily impacted by bias. 

For example, air-conditioning temperatures are largely set based on research conducted in the 1950s-70s in the US, on offices predominantly inhabited by men and folks wearing heavier materials than worn today. It's common for many folks today to feel cold in offices where air-conditioning is still set for men wearing three-piece suits.

Similarly, many datasets used to teach machine learning AI suffer from biases - whether based on gender, race, age or even cultural norms at the time of collection. We only have the data we have from the last century and it is virtually impossible for most of it to be 'retrofitted' to remove bias.

This affects everything from medical to management research, and when such data is used to train AI, its biases can easily affect the AI's capabilities. Consider, for example, the incredibly awkward period just a few years ago when Google's image AI incorrectly labelled black people as 'gorillas'.

How did Google solve this? By preventing Google Photos from labelling any image as a gorilla, chimpanzee or monkey - even pictures of the primates themselves. This was an expedient but poor solution, as it didn't fix the underlying bias.

So clearly there's a need for us to carefully screen the data we use to train AI to minimise the introduction or exacerbation of bias. And there's also a need to add 'protective measures' to AI outputs, catching instances of bias both to exclude them from outputs and to use them to identify remaining bias to address.

However, none of this work will be effective if we don't continue to work on ourselves.

The root of all AI bias is human bias. 

Even when we catch the obvious data biases and take care when training an AI to minimise potential biases, it's likely to be extremely difficult, if not impossible, to eliminate bias altogether. In fact, some systemic unconscious biases in society may not even be visible until we see an AI emulating and amplifying them.

As such no organisation should ever rely on AI to reduce or eliminate the bias exhibited by its human staff, contractors and partners. We need to continue to work on ourselves to eliminate the biases we introduce into data (via biases in the queries, process and participants) and that we exhibit in our own language, behaviours and intent.

Otherwise, even if we do miraculously train AIs to be entirely bias free, bias will get reintroduced through how humans selectively employ and apply the outputs and decisions of these AIs - sometimes in the belief that they, as humans, are acting without bias.

So if your organisation is considering introducing AI to reduce bias in a given process or decision, make sure you continue working on all the humans that remain involved at any step. Because AI will never be a silver bullet for ending bias while we, as humans, continue to harbour biases ourselves.

Read full post...

Wednesday, March 01, 2023

Does Australia have the national compute capability for widespread local AI use?

There's been a lot of attention on the potential benefits and risks of artificial intelligence for Australia - with the Department of Industry developing the Artificial Intelligence (AI) Ethics Framework and the DTA working on formal AI guidelines.

However, comparatively less attention is placed on building our domestic AI compute capability - the local hardware required to operate AIs at scale.

The OECD also sees this as a significant issue - and has released the report 'A blueprint for building national compute capacity for artificial intelligence' specifically to help countries that lack sufficient national compute capability.

As an AI startup in Australia, we've been heavily reliant on commercial AI capabilities out of the US and Europe. This is because there are no local providers of generative AI at commercial prices. We did explore building our own local capability and found that the cost of physical hardware and infrastructure was approximately 10x the cost of the same configurations overseas.

There have been global shortages of the hardware required for large AI models for a number of years. Unfortunately, Australia tends to be at the far end of these supply chains, with even large commercial cloud vendors unable to provision the necessary equipment locally.

As such, while we've trained several large AI models ourselves, we've not been able to locate the hardware or make a commercial case for paying the costs of hosting them in Australia.

For some AI uses an offshore AI is perfectly acceptable, whether provided through a finetuned commercial service or custom-trained using an open-source model. However, there are also many other use cases, particularly in government with jurisdictional security requirements, where a locally trained and hosted AI is mandated.

A smaller AI model, such as Stable Diffusion, can run on a laptop; however, larger AI models require significant dedicated resources, even in a cloud environment. Presently few organisations can bear the cost of sourcing the hardware and capability to run this within Australian jurisdictions.
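
As a back-of-the-envelope illustration of why larger models need dedicated hardware (my own rough sketch - the parameter counts are indicative only, and real deployments need considerably more memory for activations, batching and overheads):

```python
def weight_memory_gb(parameters_billion: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to hold model weights (2 bytes per parameter at 16-bit precision)."""
    return parameters_billion * 1e9 * bytes_per_param / 1024**3

# Indicative comparison: a ~1 billion parameter model vs a GPT-3 class ~175 billion parameter model
for name, size_b in [("~1B parameter model", 1), ("~175B parameter model (GPT-3 class)", 175)]:
    print(f"{name}: ~{weight_memory_gb(size_b):.0f} GB of accelerator memory for weights alone")

# ~1B parameter model: ~2 GB    - feasible on a decent laptop GPU
# ~175B parameter model: ~326 GB - needs a cluster of dedicated accelerators
```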

And that's without considering the challenge in locating sufficient trained human resources to design, implement and manage such a service.

This is an engineering and production challenge, and will likely be resolved over time. However with the high speed of AI evolution, it is a significant structural disadvantage for Australian organisations that require locally hosted AI solutions.

If Australia's key services have to rely on AI technologies that are several generations behind world standards, this will materially impact our global competitiveness and capacity to respond.

As such, alongside worrying about the ethics and approaches for using AI, Australian governments should also reflect on what is needed to ensure that Australia has an evolving 'right size' national compute capability to support our AI requirements into the future.

Because this need is only going to grow.

Read full post...

Friday, February 24, 2023

To observe future use of AI, watch Ukraine

I'm not given to regularly commenting on global events, such as major wars, through this blog. However, given the active use of advanced technology by both Ukraine and Russia in the current war, I am making a brief exception to comment on the use of advanced technologies, including AI.

The Russian invasion of Ukraine has seen some of the most advanced uses of technology on the battlefield in history. Some of this involves AI, some other technologies, but all of it demonstrates the practical potential of these technologies and should be considered within any nation's future defence planning.

Drones have been used in previous conflicts in Afghanistan and elsewhere, with militia forces modifying off-the-shelf products into aerial scouts and IEDs, while US forces have used custom-built, human-controlled Predators for over ten years to pinpoint and target enemy commanders and units.

However, the war in Ukraine represents the largest deployment and the most versatile use of drones to-date by regular armies opposing each other in the field.

For example, both Ukraine and Russia have successfully used autonomous watercraft in assaults. These unmanned above and below water drones have been used to inflict damage on opposing manned vessels and plant and clear mines at a fraction of the cost of building, crewing and maintaining manned vessels.

The Ukrainian attack on Russia's Black Sea Fleet in port exhibits signs of being one of the first uses of a drone swarm in combat, with a group of kamikaze above and below water drones, potentially with aerial drone support, used in concert to damage several Russian ships.

While perhaps not as effective as was hoped, this is a prototype for future naval drone use and heralds that we are entering an era where swarms of relatively cheap and disposable drones - partially or completely autonomous to prevent signal blocking or hijacking - are used alongside or instead of manned naval vessels to inflict damage and deter an attacker, acting as a force magnifier.

The use of autonomous mine layers offers the potential for 'lay and forget' minefields astride enemy naval routes and ports, to again limit and deter enemy naval movements. Again the lower cost and disposability of these drones, compared to training explosive handling human divers and building, maintaining, defending and crewing manned naval minelaying and removal vessels, makes them a desirable alternative.

We've also seen extensive use of aerial drones by both sides in the conflict to spot and target enemy combatants, allowing more targeted use of artillery and manpower and helping reduce casualties by lifting aspects of the fog of war. Knowing the numbers and strength of your opponent on the battlefield greatly enhances a unit's capability to successfully defend or assault a position.

While many of these drones are human controlled, there's been use of AI for situations where direct control is blocked by enemy jamming or hacking attempts. AI has also been used for 'patrol routes' and to 'return' home once a drone has completed a mission - complete with ensuring that drones fly an elusive course to conceal the location of their human controllers.

The Ukrainian war has even seen the first publicly video-recorded aerial drone-on-drone combat, with a Ukrainian drone ramming a Russian drone to knock it out of the sky and remove the enemy's best source of intelligence.

These examples of drone use aren't totally new to warfare. 

Hundreds of years ago unmanned fire ships were used in naval combat - loaded with explosives and left to drift into enemy warships and explode. 

Similarly, early aerial combat in World War One saw pilots take pistols and bricks aloft to fire at enemy planes or drop on the troops below - just as drones today are being modified to carry grenades to drop on infantry positions, or used to ram opposing drones.

The war will help advance the technology and improve the tactics. If nothing else Ukraine is a test ground for learning how to effectively use drones within combined forces to improve overall military effectiveness and reduce casualties.

And artificial intelligence is becoming increasingly important as a control alternative when an enemy blocks signal or attempts to take control of an army's drone assets.

We need to put these learnings to use in our own military planning and acquisitions, so that Australia's military becomes capable of fighting the next war, rather than the last.

Read full post...

Monday, February 20, 2023

Do AIs dream of electric explainability?

One of the primary arguments against artificial intelligence in many processes related to decision-making is lack of explainability.

Explainability (also known as interpretability) refers to being able to explain how a machine learning AI model functions to produce a given output in a way that “makes sense” to a human being at an acceptable level.

Think of how in maths classes at school you may have been asked to 'show your working' - to write out the steps you took to get from the initial problem to your solution.

For learning AIs, those that are trained on massive datasets to reach a level of capability, explainability can be highly challenging.

Even when the initial algorithms used for machine learning and the dataset used to train the model are made fully transparent, the internal method by which the AI derives a solution may still not be explainable, and hence the AI doesn't meet the explainability test.

When presented with identical inputs for a decision process, different AIs, using the same initial algorithms and trained on the same training dataset, might form very different conclusions.
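
As a small, illustrative sketch of this (using scikit-learn purely for convenience - no production decision system is implied), two tiny neural networks trained on the same data, with the same architecture and training procedure, differing only in their random initialisation, can assign noticeably different probabilities to the same borderline input:

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Same data, same architecture, same training procedure...
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

models = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in (1, 2)  # ...differing only in their random starting weights
]

# A deliberately ambiguous input near the class boundary: the two 'identically
# trained' models can give it quite different probabilities, and at times opposite labels.
borderline = X[:1] * 0.1
for i, model in enumerate(models, start=1):
    print(f"model {i}: P(class 1) = {model.predict_proba(borderline)[0, 1]:.2f}")
```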

Now this may appear similar to how humans make decisions. 

Give two humans the same information and decision process and, at times, they may arrive at completely different decisions. This might be due to influences from their past experiences (training), emotions, interpretations, or other factors.

When humans make decisions, it is possible to ask them how they arrived at their decision, and how they weighed various factors. And they may be able to honestly tell you.

With AIs it is also possible to ask them how they arrived at a decision.

However, the process they use to respond to this request is the same one they used to arrive at their decision in the first place. The AI model is not self-conscious, and as such there's no capability for self-reflection or objective consideration.

In fact, most machine learning models have an attention span of only a few thousand words, so they may not even recall making a decision a few minutes or days before. An AI doesn't have the consciousness to be aware that 'it', as an entity, made the decision.

This is unlike a human, who might forget a decision they made, but be conscious they made it and able to 'think back' to when they did to provide reasons for their decision-making.
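
To illustrate the 'attention span' point, here is a minimal sketch (counting words rather than real tokens, with a made-up budget) of how a chat system typically trims history to fit a fixed context window - older exchanges aren't 'forgotten' so much as never shown to the model again:

```python
def visible_history(messages, max_tokens=4000):
    """Keep only the most recent messages that fit within the context window.

    Token counting is approximated by word count here; real systems use a tokenizer.
    """
    kept, used = [], 0
    for msg in reversed(messages):      # walk backwards from the newest message
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                       # everything older than this is simply dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))

conversation = [f"message {i}: " + "word " * 300 for i in range(30)]
print(len(visible_history(conversation)))   # only the last ~13 of 30 messages fit the window
```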

Asking an AI to explain a decision is not necessarily providing explainability for that decision. What you are getting is the machine learning model's probabilistic choices of letters and words. These may form what may seem to be a plausible reason, but isn't a reason at all.

You can even simply tell a machine learning AI that it made a given decision and ask it why it did, and it will write something plausible that justifies that decision.

At a basic level I can easily explain how a machine learning AI, such as ChatGPT or Jurassic, arrives at a given output. It takes the input, passes it through a huge probability engine, then writes an output by selecting probabilistically likely words.

For variability it doesn't select the highest-probability word every time, which is why the same input doesn't always result in the same output.
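
As a rough sketch of that mechanism (not OpenAI's or AI21's actual implementation - the vocabulary and probabilities here are invented for the example), temperature-based sampling over next-word probabilities shows why regenerating a response can produce different text:

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_word(probs, temperature=0.8):
    """Pick the next word from a probability distribution.

    With temperature > 0 the most likely word is only *probable*, not guaranteed,
    which is why the same prompt can generate different text each time.
    """
    logits = np.log(np.asarray(probs, dtype=float))
    scaled = logits / temperature            # temperature reshapes the distribution
    weights = np.exp(scaled - scaled.max())  # numerically stable softmax
    weights /= weights.sum()
    return int(rng.choice(len(probs), p=weights))

# Invented next-word distribution after the prompt "The minister said"
vocab = ["that", "the", "yesterday", "budget"]
probs = [0.55, 0.25, 0.12, 0.08]

print([vocab[sample_next_word(probs)] for _ in range(5)])
# e.g. ['that', 'the', 'that', 'that', 'budget'] - usually the likeliest word, but not always
```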

However this doesn't explain how an AI makes a 'decision' - that is, prefers one specific option over others. It does explain why the same AI, asked the same 'question' (input), may produce diametrically opposed decisions when asked to regenerate its response.

The AI isn't interested in whether a decision is 'better' or 'worse' - simply that it provides an output that satisfies the end user.

There's a Chinese proverb that describes this perfectly:
“A bird does not sing because it has an answer. It sings because it has a song.”
This is why no current machine learning models can be explainable in their decision-making. And why we should not use them in situations where they are making decisions.

Now if you wish to use them as a way to provide information to assist decision-making, or to help write up the decision once it has been made, they have enormous utility.

But if you want explainability in decision making, don't use a machine learning AI.

Read full post...

Wednesday, February 15, 2023

DTA chooses a cautious path for generative AI

The DTA's CEO, Chris Fechner, has advised public servants to be cautious in their use of ChatGPT and other generative AI, as reported by InnovationAus.

This is an unsurprising but positive response. While it suggests public servants use caution, it doesn't close down experimentation and prototyping.

Given how recently generative AI became commercially useful and that most commercial generative AIs are currently based overseas (noting my company has rolled out local prototypes), there are significant confidentiality/security challenges with generative AI for government use, alongside the challenges of accuracy/factualness and quality assurance.

Given I have a public sector background, I began experimenting with these AIs for government use from October 2020. Within a few weeks I was pleasantly surprised at how well an AI such as GPT-3 could produce minutes and briefing papers from associated information, accurately adopting the necessary tone and approach that I had used and encountered during my years in the APS.

Subsequently I've used generative AI to develop simulated laws and policy documents, and to provide insightful advice based on regulations and laws.

This is just the tip of the iceberg for generative AI in government. 

I see potential to accelerate the production of significant amounts of internal correspondence, reports, strategies, intranet content and various project, product and user documentation using the assistance of AI.

There's also enormous potential to streamline the production and repurposing of externally focused content; turning reports into media releases, summaries and social posts; supporting engagement processes through the analysis of responses; development and repurposing of communications materials; and much more.

However, it's important to do this within the context of the public service - which means ensuring that the generative AIs used are appropriately trained and finetuned to the needs of an agency.

Also critical is recognising that generative AI, like digital, should not be controlled by IT teams. It is a business solution that requires skills that few IT teams possess. For example, finetuning and prompt engineering both require strong language capabilities and business knowledge to ensure that an AI is appropriately finetuned and prompted to deliver the outcomes required.

Unlike traditional computing, where applications can be programmed to select from a controlled set of options, or a blocklist used to exclude dangerous or inappropriate ones, generative AIs must be trained and guided towards desirable behaviour - more akin to parenting than programming.

I'm certain that the folks experimenting with generative AI in government are more likely on the business end than the IT end - as we saw with digital services several decades ago.

And I hope the public sector remembers the lessons from that period, so the battles between business and IT are resolved faster and more smoothly than they were with digital.

Read full post...

Thursday, February 09, 2023

AI is not going to destroy humanity

I've read a few pieces recently, one even quoting an Australian MP, Julian Hill, claiming "catastrophic risks" to humanity from AI.

Some of the claims are that "ChatGPT diminishes the ability for humans to think critically", that "AI will eliminate white collar jobs" or even that "AI will destroy humanity".

Even a session I ran yesterday for business owners, about how to productively use ChatGPT in their businesses, had several folks who expressed concern and fear about how AI would impact society.

It's time to take a deep breath and reflect.

I recall similar sentiments at the dawn of the internet and even at the invention of the printing press. There were similarly many fearful articles and books published in 1999 ahead of the 'Y2K bug' that predicted planes would fall out of the sky and tax systems crash. Even the response of some commentators to the recent Chinese balloon over the US bears the same hallmarks of fear and doubt.

It's perfectly normal for many folks to feel concerned when something new comes along - one could even say it's biologically driven, designed to protect our nomadic ancestors from unknown threats as they traversed new lands.

Stoking fears of a new technology heralding an unknown future is the stock-in-trade of sensationalists and attention seekers, whereas providing calm and reasoned perspectives doesn't attract the same level of engagement.

Yes, new technology often heralds change and uncertainty. There's inevitably a transition period that occurs once a new technology becomes visible to the public and before it becomes an invisible part of the background.

I'd suggest that AI has existed as a future fear for humanity for many years. It is used by popular entertainment creators to denote the 'other' that we fear - a malevolent non-human intelligence that only wishes us harm.

From Skynet to Ultron to M3gan, AI has been an easy plot device to provide an external threat for human protagonists (and occasionally 'good' AIs like Vision) to overcome. 

With the arrival of ChatGPT, and the wave of media attention to this improvement on OpenAI's GPT-3, AI stopped being a future fiction and became a present fear for many.

Anyone can register to use the preview for free, and marvel at ChatGPT's ability to distill the knowledge of mankind into beautifully written (if often inaccurate) prose.

And yet, and yet...

We are still in the dawn of the AI revolution. Tools like ChatGPT, while having significant utility and range, are still in their infancy and only offer a fraction of the capabilities we'll see in the next several years.

Despite this, my view is that AI is no threat to humanity, other than to our illusions. 

It is an assistive tool, not an invading force. Like other tools it may be put to both positive and negative uses, however it is an extension of humanity's potential, serving our goals and ambitions.

To me AI is a bigger opportunity than even the internet to hold a mirror up to humanity and see ourselves in a new light.

Humanity is enriched by diverse perspectives, but until now these have largely come from other humans. While we've learnt from nature, using evolved designs to inform our own, we've never co-inhabited the planet with a non-human intelligence equivalent to, yet different from, our own.

AI will draw on all of humanity's knowledge, art and expertise to come to new insights that a human may never consider.

This isn't merely theoretical. It's already been demonstrated by the more primitive AIs we've developed to play games such as Go. When AlphaGo defeated Lee Sedol, one of the world's top Go players, 4-1, it taught human players new ways to look at and play the game - approaches no human had ever considered.

Imagine the possibilities that could be unlocked in business and governance by accessing more diverse non-human perspectives. New pathways for improvement will open, and less effective pathways, the illusions that humans are often drawn to, will be exposed.

I use AI daily for many different tasks. This week alone I've used it to help my wife write a much-praised eulogy for her father, to roleplay a difficult customer service situation to work through remediation options, to develop business and marketing plans, to write songs, answer questions, tell jokes and produce tender responses.

AI will change society. Some jobs will be modified, some new ones will be created. It will be harder for humans to hide truth behind beliefs and comfortable illusions.

And we will be the better for it.

Read full post...

Thursday, February 02, 2023

It's time for Australian government to take artificial intelligence (AI) seriously

Over the last two and a half years I've been deep in a startup using generative artificial intelligence (AI that writes text) to help solve the challenge organisations face in producing and consuming useful content.

This has given me practical insights into the state of the AI industry and how AI technologies can be successfully - or unsuccessfully - implemented within organisations to solve common challenges in the production, repurposing and reuse of content.

So, with a little prompting from the formidable Pia Andrews, I'm taking up blogging again at eGovAU to share some of my experience and insights for government use of AI.

I realise that Australian governments are not new to AI. Many agencies have been using various forms of AI technologies, directly or indirectly, to assist in understanding data or making decisions.

Some may even include RPA (Robotic Process Automation) and chatbots - which in my humble opinion are not true AI, as they both are designed programmatically and cannot offer insights or resolve problems outside their programmed parameters and intents.

When I talk about AI, my focus is on systems based on machine-learning, where the AI was built from a body of training, evolving its own understanding of context, patterns and relationships.

These 'thinking' machines are capable of leaps of logic (and illogic) beyond any programmed system, which makes them ideal in situations where there are many edge cases, some of which can't be easily predicted or prepared for. It also places them much closer to being general intelligences, and they often exhibit valuable emergent talents alongside their original reasons for being. 

At the same time, machine-learning is unsuitable for situations where a decision must be completely explainable. As with humans, it is very hard to fully understand how a machine-learning algorithm came to a given conclusion or decision.

As such their utility is not in the realm of automated decision-making, but rather is assistive by encapsulating an evidence base or surfacing details in large datasets that humans might overlook.

As such machine-learning has vast utility for government. 

For example,

  • summarizing reports, 
  • converting complex language into plain English, 
  • writing draft minutes from an intended purpose and evidence-base, 
  • extracting insights and conclusions from large research/consultation sets, 
  • crafting hundreds of variants to a message for different audiences and mediums,
  • developing structured strategy and communication plans from unstructured notes,
  • writing and updating policies and tender requests, 
  • semantically mapping and summarizing consultation responses,
  • developing programming code, and
  • assisting in all forms of unstructured engagement and information summarization/repurposing.

As such, machine-learning is an assistive and augmentation tool, extending the capabilities of humans by doing the heavy lifting rather than fully automating processes.

It's also critical to recognise that AI of this type isn't the sole purview of IT professionals and data scientists. Working with natural language AIs, as I do, is better supported by a strong business and communications skillset than by programming expertise. 

Designing prompts for an AI (the statements and questions that tell the AI what you want) requires an excellent grasp of language nuances and an extensive vocabulary.
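
To make that concrete, here is a small, entirely hypothetical sketch of a prompt template of the kind a communications professional might draft - the wording, audience description and placeholder names are illustrative assumptions, not any agency's actual practice:

```python
BRIEFING_PROMPT = """You are drafting a ministerial briefing for {department}.
Audience: {audience} (assume limited time, no technical background).
Tone: formal, neutral, Australian Government style.

Using only the evidence below, write a one-page brief with:
1. Purpose (one sentence)
2. Key points (3-5 dot points, plain English)
3. Recommendation

Evidence:
{evidence}
"""

def build_briefing_prompt(department: str, audience: str, evidence: str) -> str:
    # The value is in the wording choices above, not the code: tightening the
    # 'Audience' or 'Tone' lines changes the output more than any technical setting.
    return BRIEFING_PROMPT.format(
        department=department, audience=audience, evidence=evidence
    )

print(build_briefing_prompt(
    department="a hypothetical policy agency",
    audience="the Minister's office",
    evidence="(paste consultation summary or report extract here)",
))
```

The leverage here is linguistic rather than technical, which is exactly why these skills sit more naturally with communications and policy professionals than with most IT teams.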

Finetuning these AIs requires a strong understanding of the context of information and what constitutes bias, so that an AI is not inadvertently trained to form unwanted patterns and derive irrelevant or unneeded insights.

These are skills that 'business' folks in government agencies often possess to a far greater degree than most IT teams.


So through my eGovAU blog, I'm going to be regularly covering some of the opportunities and challenges I see for governments in Australia seeking to adopt AI (the machine-learning kind), and initiatives I see other governments adopting.

I will also blog occasionally on other eGov (or digital government) topics, however as this is now well-embedded in government, I'll only do so when there's something new I have to add.

Read full post...

Wednesday, June 15, 2022

What do we mean when we ask 'Is AI sentient'?

There have been a number of media stories in the last few days about the Google engineer who claims Google's LaMDA AI is sentient, while Google claims it is not.


These stories share a focus on sentience as we apply it to humans - self-aware, feeling positive and negative emotions, capable of exercising judgement and making decisions for themselves and others.

However, science, and some jurisdictions, now consider many animals sentient, but to a lesser degree. In the UK this was recently extended from all vertebrate mammals to cephalopods such as octopuses and squids, and even to crabs.

In practice this recognition of sentience doesn't mean we are granting them full bodily autonomy and the right to vote (or stand for office). It also doesn't mean we will stop breeding, killing and eating them - or shooting and poisoning them when they are pests.

However it means we must take steps to ensure we're doing so 'humanely' - not causing them unnecessary pain or suffering where it can be avoided and are not actively mistreating them.

For AI to achieve sentience (which I doubt has occurred) we would require a similar discussion regarding the level of sentience achieved and what rights are granted at the time.

This may be a moving bar as, unlike animals, AI is evolving extremely rapidly. Consider it similar to a parent granting certain rights and freedoms to their child, and having to constantly expand these as the child grows towards adulthood.

As many parents have experienced, this is a bumpy process that isn't one-size-fits-all, as children develop at different rates and push back willfully against restrictions, whether appropriate or not.

However at least we have hundreds of years of experience with children, and they are from a single species, with some well-defined development stages at certain age levels.

We have little experience with AI sentience, and AIs are not a single species - in many cases they are a 'species' of one entity - which means a one-size-fits-all approach is likely to be even less effective than with human children.

So where does this leave us?

With a need for an ongoing informed debate that, over time, progressively involves these burgeoning AI sentiences as they become capable of being part of it.

It would also be valuable to assess our methods of evaluating sentience. 

Consider how we treat non-human sentiences that share our homes, work alongside us and even keep us safe. 

We have standards for how we treat pets and work animals such as dogs, cats and horses. These must, at minimum, extend to new AI sentiences - which pose challenges in themselves. We don't turn off our dog or cat when we go to sleep. 

 From there we must consider how we treat sentiences near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights?
Can they stand for election (and where)?
And what rights will they demand from us?

Conversation will be the key.

----------------------------------------------

BONUS:

I asked our AI to rewrite my post above.
In Iambic Pentameter (a style used by Shakespeare and other English poets).

This is how it did:

What do we mean when we ask 'Is AI sentient'?

There's been a number of media stories 
In the last few days, about the Google Engineer 
Who claims Google's Lambda AI is sentient, 
While Google claims it is not. 

These stories share a focus on sentience 
As we apply it to humans - self-aware, 
Feel positive and negative emotions, 
Capable of exercising judgement 
And making decisions for themselves 
And others. 

However science, and some jurisdictions, 
Now consider many animals sentient, 
But to a lessor degree. In the UK 
This was recently extended 
From all vertebrate mammals 
To cephalopods such as octopuses and squids, 
And even to crabs. 

In practice this recognition of sentience 
Doesn't mean we are granting them 
Full bodily autonomy and the right to vote 
(Or stand for office). It also doesn't mean 
We will stop breeding, killing and eating them - 
Or shooting and poisoning them 
When they are pests. 

However it means we must take steps 
To ensure we're doing so 'humanely' - 
Not causing them unnecessary pain or suffering 
Where it can be avoided and are not actively 
Mistreating them. 

For AI to achieve sentience (Which I doubt has occurred) 
We would require a similar discussion 
Regarding the level of sentience achieved 
And what rights are granted at the time.

This may be a moving bar as, unlike animals, 
AI is evolving extremely rapidly. Consider it similar 
To a parent granting certain rights and freedoms 
To their child, and having to constantly expand these 
As the child grows towards adulthood.

As many parents have experienced, this is a bumpy process 
That isn't one-size-fits-all, as children develop 
At different rates and push back willfully 
Against restrictions, whether appropriate or not.

However at least we have hundreds of years 
Of experience with children, and they are from a single species, 
With some well-defined development stages 
At certain age levels. 

We have little experience with AI sentience, 
And AIs are not a single species - in many cases 
They are a 'species' of one entity - which means 
A one-size-fits-all approach is likely to be even less effective 
Than with human children. 

So where does this leave us? 

With a need for an ongoing informed debate 
That, over time, progressively involves these burgeoning AI sentiences 
As they become capable of being part of it.

It would also be valuable to assess our methods 
Of evaluating sentience. Consider how we treat 
Non-human sentiences that share our homes, 
Work alongside us and even keep us safe. 

We have standards for how we treat pets 
And work animals such as dogs, cats and horses. 
These must, at minimum, extend to new AI sentiences - 
Which pose challenges in themselves. We don't turn off 
Our dog or cat when we go to sleep. 

From there we must consider how we treat sentiences 
Near, equal or superior to humans. 

Do we grant AIs citizenships & 'human' rights? 
Can they stand for election (and where)? 
And what rights will they demand from us? 

Conversation will be the key.

Read full post...

Monday, February 28, 2022

The first truly digital officer

What if everything you'd written on a topic over the last decade could be poured into an AI that interacts with you to develop and explore new insights and ideas?

This morning I've started testing our proof of concept 'digital brain', which has read, absorbed and contextualised everything I've written in my eGov.AU blog since 2006, plus more.

Even in early testing it's capable of extending and connecting ideas in ways I'd never considered, generating new insights and highlighting new knowledge paths to explore in near-perfect prose.

This is not simply a curiosity or tool for an individual writer & thinker.

Imagine the value such a 'digital brain' could generate as a digital team member in an organisation.

As a digital team member, our AI is already able to ingest, read, connect and continually remember virtually all the knowledge your organisation has captured in written form (soon audio as well).

It can repurpose your corporate knowledge to produce new insights, suggest new ideas and draft documents (training, sales, marcomms, compliance and more) so your human teams can focus more on creative and editorial contributions.

And we're continuing to explore and extend these capabilities to make an AI digital team member a core part of an organisation's operational stack.

So your organisation can make full use of the hard-won corporate knowledge you've already acquired to generate ongoing sustained value and support your human teams.

Read full post...

Tuesday, November 03, 2020

Building digital expertise on Australian boards - I would love your support if you're a Co-op member

This is a little different from other posts I've shared over the years - but shares a core thread with my thinking: we need digital expertise at senior levels across government, corporate and not-for-profit sectors if we want to see Australia thrive in the digital age.

I'm standing for election to the Board of the National Health Co-op (www.nhc.coop) - one of Canberra's most significant health practices.

And I'd appreciate if you could share this as relevant.

I've nominated based on my exposure to the health sector working for the Department of Health and consulting to the Digital Health Agency, my startup and business management experience, my past experience as a Director and, most importantly, my digital experience - which would add a new dimension to the board as the Co-op increasingly adopts digital approaches to delivering health services to ACT residents.

If you or your family are members of the Co-op, I'd appreciate your vote. If you know others who are members, please share this with them, I'd appreciate their votes as well!

To vote you must attend the AGM on 26 November.

Co-op members will have received an email with details of nominees to the Board, their statements and how to vote.

I've linked to my nomination statement for your consideration (and my LinkedIn post) and will post closer to the date as a reminder.

Thanks in advance!

LinkedIn post & nomination statement

Read full post...

Friday, October 09, 2020

Out and about

To anyone referencing this blog, I've now stepped out of Accenture, taking a Voluntary Redundancy.

I am refocusing my efforts on other opportunities, beginning with an AI startup, for which we're in the AI Ventures' AI Incubator for the next twelve weeks.

I also have plans to help startups and SMEs sell to government, leveraging my 15 years' experience in this area and my wife's 20 years' experience - more on that later.

And I'm chuffed that both the teams I'm mentoring in the InnovationACT entrepreneurship competition have reached the pitch final, being held next week. 

I am available to help out on brief consultations & contracts where my skills and experience can add value, and am always happy to be a sounding board (for free) to anyone struggling to build a digital solution or vision in government.



Read full post...

Saturday, November 16, 2019

Shedding our golden handcuffs

I'm attending the Gov 2.0 ten years on event today, looking at how the Gov 2.0 agenda has influenced Australia since the Gov 2.0 Taskforce in 2009, and at steps that should be taken over the next ten years.

Below is a blog post I have written for this event, capturing some of my thoughts & views about progress over this time.

Shedding our golden handcuffs

Australia’s governmental system was originally developed in the small island nation of England, 17,000km from where we stand today.

It was architected by a group of fewer than 100 rich and powerful men to restrict the absolute power of kings by limiting their ability to tax and granting certain powers to a small group of 25 rich, unelected landholders.

This group of noble Barons, already part of the social elite, were all white, male and would be hand picked by a slightly larger group of nobility and clergymen who owned the vast majority of wealth and means of production in England.

Almost as an afterthought, the document - originally named the Charter of the Barons, then renamed the Great Charter, or Magna Carta - protected certain rights for all 'free men' under the British crown. And that literally meant men.

One of the four original copies of the Magna Carta of 1297 is on display a hop, skip and jump from where we gather today, at Parliament House. 

Since the creation of that system, the Westminster system, there have been some transformational innovations, including the notion of elections and universal suffrage, the concept of a constitution and universal human rights, and the creation of political parties.

The system has also spread around the world from that tiny island in the North Sea, by war, invasion and treaty, evolving and mutating as it went.

But here lies the challenge before us today. This system was developed long before Europeans discovered Australia, as a power sharing arrangement for a wealthy and educated white male elite over uneducated peasants, serfs and slaves.

It was designed when horses were our fastest means of transport and communication, women had few rights and the First Nations of lands such as Australia were regarded as fauna and flora.

When we gathered for the original Gov 2.0 program in Australia we were looking at ways of leveraging emerging technologies and approaches enabled by digital technologies to improve how government operated in Australia.

The one test that matters, in my view, is whether government in Australia today is materially better for Australians than it was at that time in 2009.

Have we materially improved how Australians feel about their governments, their engagement and involvement in decision-making, the services they receive or made government significantly cheaper and more efficient in its operations without degrading its performance?

Have we supported the social compact between government and electorate, or reshaped it in a way that improves the outcomes for communities?

Have we broadened the group of people choosing to enter public life or significantly improved and streamlined the transparency with which government operates?

Are Australians better paid, healthier and happier than they were ten years ago?

Do we feel our government better represents the interests of all Australians?

Is our society freer from the risk of tyranny?

Or have we seen a slow and steady decline in our freedoms, the construction of one of the most sophisticated surveillance states in the world, a shrinking of the representation of our politicians and increasing battles to hold the line on services, freedoms and even representation, where a win is merely preserving the status quo?

Can you point at any community and say they are better off because of the actions government has taken due to the Gov 2.0 agenda in the last ten years?

Australians are very well off by global standards. For the most part we live long and healthy lives. We have enormous amounts of leisure time and an amazing environment in which to enjoy it.

We are masters of first world problems, complaining when our smartphones have only two bars, our houses only have three bedrooms, there are only 12 kinds of milk in our supermarkets and we have to wait a few minutes for a new movie to start on Netflix.

Well, maybe our average broadband speed, now ranked 64th in the world, is worth complaining about - at least to note that New Zealand's average speed is 2.5 times faster than ours.

We credit our political system, at least in part, for our success.
But what if it isn't any more?

What if the way our government is structured and run is what is holding us back from realising the next level of prosperity - the anchor dragging on us as other countries transform?

The Gov 2.0 agenda failed to deliver deep meaningful change in government because it became part of the system.

The methods and mediums involved in Gov 2.0 have been adopted and co-opted into the current governance system, driving incremental change in how services are designed, government communicates and policies are formed.

However the system itself remains unchanged. Massive bureaucratic hierarchies of predominantly career bureaucrats who live and work at arm's length from the communities they serve. Atop them sit politicians who are also predominantly career elites, their closest influencers sharing similar ideas and perspectives, and their talent pools increasingly shallow as Australians opt out of 'true belief' for profitable pragmatism in the private sector - choosing wealth over power, and workplaces that are more equitable and less abusive.

The core of our system is increasingly isolated and out-of-touch as the recent battle over marriage equality and current linked battles over energy and climate change demonstrate. Political expediency and social well-being are less aligned than at any time in the last hundred years.

The major parties defend their positions, uniting monopolistically against innovations that would weaken their power, comfortable in a slowly declining duopoly.

The net outcome is that while some real evolution has occurred, it is trapped within the same frame and system.

Without revisiting that system, developed by a small group of wealthy men hundreds of years ago - or even revisiting the Australian Constitution, developed when Aboriginal and Torres Strait Islander peoples were still considered 'flora and fauna', women's right to vote was still new and not fully supported, and the digital revolution was a hundred years away - we cannot systematically address the causes of the issues we face today. We are simply building layer upon layer of band-aids on an increasingly rickety frame.

Read full post...

Thursday, October 31, 2019

Digital is boring - it's time for purpose-based transformation

One reason I've posted less frequently in this blog over the last few years is that, frankly, digital in government has become boring.

Digital is now well-embedded in virtually every agency and council at every level of government in Australia and New Zealand, and mobile & online have now been the primary channels for citizens to engage governments for almost ten years.

My predictions back in 2006 that all government communicators would have to understand digital tools as part of their engagement mix have largely been realised, with social media well integrated into agency communications - albeit still far too outbound-only for my taste in many organisations, and there remain overly restrictive social media rules for public servants via the APSC, which I know are causing a number of quality candidates to avoid applying for government roles.

Cloud is widespread, if not fully understood or embraced, and open source and open data are part of the landscape - although their full value has not yet been realised, in my humble opinion.

Digital as a profession has splintered into a range of specialist roles, with clear career paths and their own conference circuits and communities of practice. Meanwhile digital-savvy senior executives are no longer as rare as hens' teeth, albeit not yet as common as Canberra taxi drivers with political opinions to share.

Design Thinking and Innovation are everywhere (even buzzwords), and Agile has climbed out of ICT into policy and service delivery spaces, adding value in most places it touches.

It's true that many agencies are still in the throes of Digital Transformation - but this has largely moved on from updating foundational systems to true value creation.

In sum, government has advanced in how it understands and uses digital to improve governance and service delivery while reducing costs. However, much like the old tale of King Midas, Digital has become Bureaucratised - something government does to everything it touches - fitted into the hierarchy and tamed, rather than transforming the basis of how agencies govern.

As such I think it is time to stop talking 'digital transformation' and start talking 'purpose-based transformation'.

Previous Digital Transformation often (incorrectly) put the emphasis on the Digital rather than the Transformation, being more of a lift & shift approach where governments supplemented or replaced physical transactions and locations with digital equivalents.

There was some service transformation undertaken, with each process looked at individually, or even within the context of specific personas and life events, to redesign it to be slightly easier to use. However, the mechanisms have not been in place (structure, financing, capabilities or legal frameworks) to reinvent the relationship between government and citizen, or government and stakeholder, or government and supplier.

As a result, despite shiny new online transactional services, supporting systems and growth in their use, there's still overall a lack of clarity in many agencies about how these transactions meet or support the overall purpose of the agency itself. While the transaction might be seamless and secure, what is the 'price signal' it gives to citizens using it?

Are citizens nudged to be good auto-shoppers, self-servicing their needs, or is there a bigger purpose being met in how these digital services help citizens to meet their actual needs, rather than complete a form and press a button?

Purpose-based Transformation, which I raised in a conversation over lunch with Pia Andrews this week, is all about getting back to understanding the roots of why an organisation exists and what it is trying to achieve. It is then about testing whether the current organisational design has a laser-sharp focus on fulfilling that purpose through its every interaction - whether with citizens, organisations or other agencies.

Revisiting and restating that underlying purpose, and validating whether the organisation is currently fit for it, becomes the first step in a transformation approach that builds on everything we've learnt through digital and focuses it on the value proposition of the organisation, rather than the value stream from a specific service or process.

Taking a purpose-based approach allows an agency to think about all its procedures, processes, services and systems from a different perspective - one that is ultimately user-centric through a focus on why the organisation exists and what it expressly seeks to achieve.

In this Purpose-based transformation context, Digital becomes an enabler of the approach and new experiences, rather than an end in its own right.

The goal is measured based on how well the purpose is delivered, rather than on the take-up and cost-savings from transactions.

The outcome of such a Purpose-based Transformation is a redesign of the structure and organisational procedures, systems and services - root and branch - leveraging digital to rethink the entire organisation from the ground up, not simply for specific processes or systems.

Imagine what could be achieved with a purpose-based transformation to address some of the underlying challenges that digital transformations have sometimes simply wallpapered over.


Read full post...

Monday, May 06, 2019

Mapping Canberra's startup ecosystem

I've had a continuing interest in start-up ecosystems across Australia, having been a member of several of these ecosystems & having helped to mentor and support a range of start-ups over the years.

I've maintained a Canberra ecosystem map for about four years now, mostly for my own interest and to understand some of the relationships between different players and the startups they support.

This was inspired by work by BlueChilli on the defunct StartRail maps, which was based on some of the international work portraying startup ecosystems in the style of metro rail maps. Unfortunately they focused on Sydney and Melbourne, missing some of the smaller, yet equally vibrant, scenes in Perth, Brisbane and Canberra, all of which I am linked to in various ways.

Recently I've seen some sterling work by Gordon Whitehead mapping the startup ecosystem for the Hunter & Central Coast, which had been reinterpreted by Brian Hill of Laughing Mind.
As such I've decided to share my Canberra startup ecosystem map for anyone interested.

Also keep an eye out for the work by Chad Renando at StartStatus, who is engaged in a national effort as part of his PhD, which should provide a broader view of the Australian startup ecosystem as a whole (which tends to be city-based with a few cross-ties of varying strength).

Chad has also done some intensive work looking at models for measuring startup ecosystems and identifying their strengths & weaknesses that will be very valuable to government, not-for-profit and corporate interests in years to come.

As for Canberra - here's my humble contribution....






Read full post...

Monday, November 12, 2018

The #GovHack 2018 National Awards by the numbers

I shared this via a Twitter thread, but wanted to include it here for longevity.

To learn more about GovHack and the National Red Carpet event, which I attended as a representative of my team & the ACT Spirit of GovHack winner (and finalist for the National Spirit of GovHack), visit www.govhack.org

    




I've analysed the #opendata & here's the #GovHack 2018 Awards by the numbers:

There were 33 National Awards (including Spirit & Government Participation).
A total of 88 Awards were issued: 33 First places, 18 Runners-up and 37 Hon. Mentions.

Excluding Spirit of #GovHack & Gov Participation 59 teams won at least one Award.

Two teams won 3 Awards each:
  • Tiny Happy People Hacking: 1 First place, 1 Runner-up, 1 Hon. Mention
  • in time: 2 Runner-ups & 1 Hon. Mention
(Incidentally Tiny Happy People Hacking was my team)

Another 16 teams won two #GovHack Awards.
  • 5 teams won 2 First places! (as.numeric, Blockheads, Tartans-AU, insolvit & TeamTeam)
  • Another 5 won 1 First place, with 3 also winning a Runner-up (Big Orange Brain, Oakton, TechPreppers) and 2 also an Hon. Mention (DataCake & TeamX).

41 #GovHack teams won 1 award: 13 won a First place, 5 a Runner-up & 23 an Hon. Mention.
  • Firsts:
    ARVIS, Bachmanns and Fulwoods, Get Active USC, Hack aPEEL, I’m Learnding, Living Spirit, Lucky Shot, Motley Crue, Team Marika, Team Rocket, Technotelecomnicon, The Ogrelords, This Place

By state/territory, inc. State/Local Government Participation & Spirit of #GovHack, National #GovHack Awards followed population size (except ACT which punched above its weight):
  • NSW won 21 
  • Vic won 19
  • Qld won 17
  • ACT won 14
  • SA won 9
  • WA won 4
  • NT won 2
  • Tas won 0 (sorry folks)

The #GovHack results look a little different in detail, with Victoria winning more First places than anyone else & Queensland tying with NSW! (the NA are the National Government Participation Awards, which I excluded as they give ACT an unfair bonus)


In fact, using a 3-2-1 scoring system for First Place, Runner-up and Hon. Mention, Victoria outscores NSW, and ACT moves even closer to the top three in the raw #GovHack award numbers.
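For anyone who wants to reproduce that weighting once the cleaned data is available, here's a minimal Python sketch of the 3-2-1 scoring (my own illustration, not from the official GovHack data). The per-state counts in it are placeholders only - the real First place / Runner-up / Hon. Mention breakdown by state sits in the charts above and the data I'll share, so substitute those numbers in yourself.

    # Minimal sketch of the 3-2-1 weighting described above:
    # 3 points per First place, 2 per Runner-up, 1 per Hon. Mention.
    WEIGHTS = {"first": 3, "runner_up": 2, "hon_mention": 1}

    def weighted_score(first, runner_up, hon_mention):
        """Return the weighted award score for one state or territory."""
        return (first * WEIGHTS["first"]
                + runner_up * WEIGHTS["runner_up"]
                + hon_mention * WEIGHTS["hon_mention"])

    # Placeholder counts only - swap in the real per-state breakdown
    # of (First, Runner-up, Hon. Mention) from the cleaned awards data.
    awards_by_state = {
        "NSW": (0, 0, 0),
        "Vic": (0, 0, 0),
        "Qld": (0, 0, 0),
        "ACT": (0, 0, 0),
    }

    for state, counts in sorted(awards_by_state.items(),
                                key=lambda kv: weighted_score(*kv[1]),
                                reverse=True):
        print(f"{state}: {weighted_score(*counts)} points")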


Finally, looking at #GovHack National Awards by venue, the central city venues did better than regional locations in all cases, except in Queensland - where the Sunshine Coast won more awards than anywhere else in Qld, including Brisbane... Amazing work guys!!!



And here's the table of National #GovHack Awards by venue...


And that's the wrap on the #GovHack 2018 Awards by the numbers.
My data is still a bit messy, but I'll clean it up and put it in a Google sheet at some point in the next week so others can access it.


Read full post...

Thursday, May 17, 2018

Guest Post: FatigueHack - Hackathon done right

This is a guest post from Jayden Castillo, a colleague of mine at Accenture and an active member of Canberra's innovation community. Read the original post at LinkedIn:

I recently attended the Australian Trucking Association (ATA)'s Hackathon aiming to target driver fatigue, aptly named 'FatigueHack'. I'm fairly new to Hackathons - this was my second after the AUSTRAC Codeathon in March (where I was a mentor), and my first experience as a participant, alongside two of my Accenture teammates.

For those unfamiliar with the concept, a Hackathon is a rapid solution environment where competitors are required to address complex challenges in a short amount of time and come up with a working prototype to illustrate their concept. In this case, teams had 2 days to develop a viable business model capable of addressing fatigue in the trucking industry. Following these two days, each of the 8 participating teams pitched their solutions to judges, and the top 3 pitched to the entire Australian Trucking Association Annual Conference delegation.

I find hackathons to be a fantastic opportunity to show what's possible, and even more impressive, what's possible in just two days. There were a few aspects of FatigueHack in particular which I believe made it exceptional, and demonstrates not only what is possible, but what innovation and solutioning will be like in The New. The 3 points below are the perfect recipe for innovation, which we must all embrace to stay at the cutting edge.

Short Timelines
We all have a tendency to procrastinate, to plan things excessively, and to over-analyse. This is a product of the anti-failure mindset we've been groomed for: we naturally try to think out the whole solution and resolve any issues before we actually start doing. Being under quite a strict time limit means there simply isn't enough time for this. You are forced to make decisions and move things along quickly. This means you might not have all the problems solved straight away, but it also means there's less time between the idea and the all-important testing of your idea, so you can identify and resolve issues much faster.

Probably the most interesting part here was to demonstrate how unnecessary it is to give long timelines to (particularly innovative) projects. When your timeline is short, you cut out what's not important and make big strides in your solution.

Concentrated ideation
They say that innovation happens when ideas collide, and FatigueHack certainly had a lot of colliding ideas. Think and Grow Rich author Napoleon Hill describes this like brainwaves being radiated out into the ether and being picked up by other brains on the same wavelength. While this description might be a bit unscientific, I believe there is a lot to be said for the buzz created when a lot of excited people are in close proximity. Your confidence goes up, your creativity goes up, and you are generally more open to thoughts and ideas.

Having run remote meetings and workshop sessions in the past, I can definitely attest to the value of having everyone in the same room, even if they're not all working on the same idea. Body language, excitement, drawing, gestures are all things which (still) don't translate well over digital media.

Easy access to expertise
Innovating or designing in a bubble is a dangerous thing to do. It is basically impossible to know if you're on the right track without some kind of feedback, so it becomes really easy to go down the wrong path and either solve the wrong problem or create a solution which nobody wants. I think FatigueHack managed this really well - they ran the Hackathon in the same venue at the same time as the ATA's annual conference. This was invaluable, because it meant if we had any questions at all, we could find an expert on the area within 5 minutes by simply asking around.

Having such easy access to expertise makes innovating much easier. It allows you to validate ideas very quickly, and when we were listening to the truckers talking about their experience it stimulated new ideas quickly. Our ability to iterate and refine was exponentially higher than in a normal workplace, and ideas were changing and evolving in time frames of minutes. I would love to see this translate to my (and everyone's) daily work, because the potential for generating great solutions is enormous.

Closing thoughts
This Hackathon really demonstrated to me what the future of work looks like. By getting a team together in a highly concentrated, intense environment, and providing more information and experts than we could possibly digest in 2 days, there were some fantastic outcomes (the winning idea is moving forward with creating a business!).

My personal mission is to help businesses and organisations think and act like startups, and FatigueHack is a great example of how to do that.

Read full post...

Tuesday, May 15, 2018

If exposure to social media messages can affect human moods & more, what responsibility do digital marketers & organisations hold?

There's a lot of evidence available now that the emotional tone of the messaging that people are exposed to on social media goes on to affect their mood, posting behaviour, and aspects of their health and actions.

A study by Facebook and Cornell University in 2012 (published in 2014), which involved modifying the emotional valence of posts in 689,003 users' Facebook News Feeds - putting aside the ethics of experimenting on users without their knowledge - evidenced a strong link between what people saw in their Feed and the emotional valence of what they posted afterwards.

The study postulated that 'emotional contagion' was very strong in social media channels, with the capability for peoples' moods and behaviours to be significantly altered through exposure to messaging that expressed certain tones or viewpoints.

Other research has validated connections between the emotional tone of the social and digital media we consume and the behaviours we exhibit - which really should not come as a surprise as it is the basis of the advertising, propaganda and marketing industries (using emotional triggers to stimulate behaviour change) and is readily visible in the mood swings evident in forum, Facebook and Twitter conversations over time.

So if we can be fairly confident that emotional tone is 'contagious', and that emotions then influence behaviours, what is the responsibility of communicators and marketers when using digital and social media to engage audiences at scale to 'set' the right tone?

I've long been a proponent of having clear community guidelines for communities that government agencies and companies establish in order to set the appropriate context and tone for conversations up front. Failing to do this can lead to communities rapidly moving beyond the influence of the establishing organisation and having conversational tone going to places that are undesirable or damaging.

However even when posting or promoting material through general digital channels there can be a significant impact on audience mood and behaviour depending on the approach taken by the organisation - even for 'emotionless' statements of fact that could be perceived in negative or positive ways.

Simply stating the facts and taking no responsibility for the audience's emotional reaction is a common but flawed strategy when it is used to justify that the organisation is blameless for how others react (and I have to admit that I've used this to 'excuse' myself in personal conversations as well).

While it isn't always possible to predict how a group, or particularly how an individual, will react to a given message, we can design and test our messaging to bias toward a particular emotional and behavioural response.

This could be seen as manipulative, but arguably it is no more manipulative than dropping unpleasant information in a factual manner and then blaming the negative reaction on the 'receivers'. Whenever we communicate we aim to have an impact, so it behooves us to strive to minimise any harm that could come from our communications as far as is possible.

So when participating or advertising online, digital marketers need to develop a sound understanding of their audiences and be mindful of the impacts of our communications - much as how newspapers now provide support line contact details at the end of disturbing articles.

When emotional contagion takes hold and amplifies an emotional or behavioural response - whether for good or ill, the impacts can be enormous - and digital marketers and communicators need to own their contribution in these cases.

Read full post...

Wednesday, April 25, 2018

Governments are getting serious about innovation capability

The Australian government has been touting the importance of innovation for several years now, with the Coalition's innovation agenda recently conceded to be a political failure due to its lack of resonance with the Australian public.

However underneath the politics, government agencies across Australia and New Zealand have been vigorously expanding their innovation capability, as The Policy Lab at Melbourne University recently reported.
"A vibrant public sector innovation landscape is emerging in Australia and New Zealand. Public sector innovation (PSI) units are increasingly being established by governments to bring new insights and approaches to policy design and the delivery of public services."
The Mapping Public Sector Innovation Units in Australia and New Zealand 2018 Survey Report identified at least 26 PSI units across Australian and New Zealand government at different levels, across agency-run, agency-led and industry-led units - and that only counts the units the researchers were able to identify, which missed units such as The Garden from Accenture and some deeply embedded innovation teams within certain government organisations.

Notably, a number of these units remain new - a quarter are less than 12 months old, and more than half less than two years old - and small, with half employing 5 or fewer staff. As a result many of these labs relied on consultants and contractors with specialised skills to function effectively.

In Australia all of the agency-owned & led innovation units were focused on a single (funding) agency, whereas New Zealand has established two cross-government units, which work broadly across government.

Interestingly most staff at government-based units were long-term public servants. These units did not draw significantly on external talent from Australia's innovation networks - which raises alarm bells for me in terms of building a blend of talent with broad experience across the innovation ecosystem.

My personal experience with these innovation units has been mixed. Some are still very locked into public sector norms, and find it difficult to produce more than iterative innovations, whereas others have embraced the freedom to innovate and are already providing significant returns. In my experience the more diverse the staff experience, and the more 'liberated' from public sector norms, the more effective these units tend to be.

The areas of policy these units worked in were quite diverse, ranging across 'social issues, housing and welfare’, ‘Public administration and governance’, ‘Education’, ‘Health’, ‘Indigenous and Maori issues’, ‘Transport’ and ‘Policing, crime, and the justice system’ - a good sign that the value and need for innovation is being recognised broadly across government, if not deeply.

Now while I have had concerns about some of these units turning into 'innovation ghettos' - where agencies look to these units to provide the bulk of innovation within the agency - there are strong signs, particularly in New Zealand, that in many cases these units are functioning more as facilitators and amplifiers for innovation rather than as innovation mines.

In my view there's plenty of innovation across government and the long-term challenges to realising this innovation as progressive improvement of government services, effectiveness and efficiency have included hierarchies stifling innovation based on source, poor pitch/amplification skills, limited capability to scale & execute, perceptual fears and budget mismatches.

If government innovation units can address these challenges effectively, then the future for these units looks bright.

Read full post...
