Wednesday, April 05, 2023

What does AI mean for entrants into the job market?

Last week my youngest daughter graduated from university with an IT-related degree (with second-class honours; she did better than me!).

At her graduation ceremony, the occasional speaker, who was a public and not-for-profit sector CTO, congratulated graduates for being some of the last to graduate without the support and integration of large language model AIs in their degrees. She also commented that her speech was partially written by ChatGPT.

While I've used generative AIs in my work and personal life for more than two years, these comments really struck home for me how integrated these AIs will become in everyone's lives.

No-one being educated today should be denied access to AI support to help them to grasp concepts and elevate their work to the highest possible standard.

No-one working in an office should be denied access to AI support to help them perform at their best every day.

Organisations that seek to cut off access to these AIs because it's somehow 'cheating' should note how calculators were once seen by teachers as tools for cheating, and how some workplaces resisted the introduction of computers because executives and managers didn't type themselves; they dictated to typing pools.

Indeed, I remember, within the last 20 years, working for a Secretary in a very large Federal government department who did not have a computer in her office. She had her EA print all her emails and responded via dictation, which her EA typed up. To present a new-look website to her, we had to print dozens of pages, and she'd point to a button and ask what happened if the user clicked it.

This doesn't cut the mustard in the AI Age we've now entered. We need to give our students and workers access to the most modern tools to help them perform at peak.

So when I saw the following tweet, I had to ask GPT-4 for a response, using her tweet verbatim as the prompt.


This is what GPT-4 generated:

As an AI language model, I can understand why some members of Gen Z might be concerned about the impact of AI on their careers and future prospects. However, I believe that AI is not necessarily a threat but can be an opportunity if approached in the right way. Here are a few tips that might help:

  1. Develop skills that are complementary to AI: Instead of competing with AI, focus on developing skills that complement it. For example, skills that require human judgment, empathy, and creativity are still in demand and will continue to be valuable even as AI advances.
  2. Stay curious and keep learning: AI is still a relatively new field, and there is a lot to learn. Keep up with the latest trends and developments, take online courses or attend workshops to learn new skills, and stay curious about how AI is being applied in different industries.
  3. Collaborate with AI: Rather than seeing AI as a competitor, try to work alongside it. AI can be a powerful tool for automating tasks and making processes more efficient, so be open to collaborating with AI in your work.
  4. Embrace change and be adaptable: The job market is constantly changing, and the skills that are in demand today may not be in demand tomorrow. Be prepared to adapt to new technologies and trends, and don't be afraid to take on new challenges and learn new skills.
  5. Develop a growth mindset: Finally, cultivate a growth mindset and embrace the idea that learning and development are ongoing processes. See challenges as opportunities for growth and be open to new experiences and ideas.

Overall, it's important to remember that AI is still in its early stages, and it's difficult to predict exactly how it will impact the job market. However, by staying curious, learning new skills, and embracing change, you can position yourself to thrive in the new world of work.


Now Jenny, the tweet's author, felt this was a trifle impersonal to use without a human edit, which is fair enough.

AI shouldn't be used to automate engagement, but to help streamline it. There's always going to be a role for humans to review and adjust messages to match the audience, using the context a human has that an AI does not.

However, it is possible to get AI to add personality - you simply have to tell it what personality you want it to add.

So I generated three further versions...

Using the voice of Albert Einstein:

Intelligence is not a competition between humans and machines, but rather a collaboration. Instead of fearing the advances of artificial intelligence, we should focus on developing complementary skills that require human judgment, empathy, and creativity. One must always remain curious and keep learning, for knowledge is the key to unlocking new opportunities. Embrace change and be adaptable, for the job market is constantly evolving. Finally, approach challenges with a growth mindset and see them as opportunities for personal development.

Using the voice of Arthur C Clarke:

As I've always maintained, the only way to discover the limits of what is possible is to venture past them into the impossible. In the age of artificial intelligence, we should not fear the rise of the machines but embrace it, for it can offer us a new path to knowledge and discovery. Instead of competing with AI, we should focus on developing skills that are complementary to it, such as our unique human abilities of creativity, intuition, and empathy. Learning must be a lifelong pursuit, and we must keep up with the latest advancements to stay relevant in the changing landscape of the job market. The ability to adapt and evolve is paramount in this new era, and we should see challenges as opportunities for growth and transformation.

And, finally, in the voice of Mahatma Gandhi:

Artificial intelligence is a new force in our world, and while it may bring challenges, we must approach it with a peaceful and positive mindset. Rather than viewing AI as a threat, we should strive to develop skills that complement it, such as our unique human capacities for creativity, empathy, and compassion. Learning should be a lifelong pursuit, and we must stay curious and open-minded to the possibilities of AI. We should collaborate with AI, not compete with it, for only by working together can we achieve true progress. Let us embrace change and be adaptable, for the world is constantly evolving, and challenges are opportunities for growth. With a growth mindset and a spirit of cooperation, we can create a future where AI and humanity coexist in harmony.

Whether perfectly true to the originals or not, these variations were produced in a few minutes, far faster than a human could perform the same task. 

With AI the door is open to producing content in varied voices and perspectives at great speed.
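As an illustrative sketch of the persona trick described above (the helper function and example prompt are my own, not from any particular library; the message format follows the chat-completion style used by GPT-4-era APIs), re-voicing content amounts to changing only a 'system' instruction while keeping the user's prompt verbatim:

```python
# Sketch of persona prompting: a "system" message sets the voice,
# and the user's original text is passed through unchanged.

def persona_messages(persona: str, prompt: str) -> list[dict]:
    """Build a chat request asking the model to answer in a given voice."""
    return [
        {"role": "system", "content": f"Respond in the voice of {persona}."},
        {"role": "user", "content": prompt},
    ]

# The same tweet, re-voiced three times by swapping only the persona:
for persona in ("Albert Einstein", "Arthur C Clarke", "Mahatma Gandhi"):
    messages = persona_messages(persona, "What does AI mean for Gen Z careers?")
    # A call such as openai.ChatCompletion.create(model="gpt-4",
    # messages=messages) would then return the styled response.
```

The design point is that the substance of the answer stays constant; only the single system instruction changes, which is why generating many voices takes minutes rather than hours.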

Tuesday, April 04, 2023

2nd Australian Responsible AI Index launched - calls for government to regulate sooner rather than later

Today marked the unveiling of the 2nd Australian Responsible AI Index, accompanied by urgent appeals for the government to intervene and curb the potential misuse of artificial intelligence (AI). 

The Australian Financial Review provided comprehensive coverage of this critical topic, revealing that a mere 3% of Australian companies are managing the adoption and continuous use of AI in a responsible manner.

As AI permeates almost every facet of business operations, it is crucial that its management and regulation extend beyond vendors and IT teams, ensuring responsible policies are in place for both the business and society as a whole.

The Index report disclosed several key findings:

  • The average Responsible AI Index score for Australian organisations has remained stagnant at 62 out of 100 since 2021.
  • While a significant 82% of respondents believe they are adopting best-practice approaches to AI, a closer look reveals that only 24% are taking conscious steps to guarantee the responsible development of their AI systems.
  • There has been a growth in organisations with an enterprise-wide AI strategy linked to their broader business strategy, with the figure rising from 51% in 2021 to 60%.
  • Among those organisations, only 34% have a CEO who is personally committed to spearheading the AI strategy.
  • Organisations with CEO-led AI strategies boast a higher RAI Index score of 66, compared to a score of 61 for those without direct CEO involvement.
  • A total of 61% of organisations now recognise that the advantages of adopting a responsible AI approach outweigh the associated costs.

The Responsible AI Index serves as a timely reminder for the Australian government to act swiftly in the face of these findings, reinforcing the need for a more responsible approach towards AI implementation across the board.

Italy bans ChatGPT (over privacy concerns)

In the first major action by a nation to limit the spread and use of generative AI, Italy's government has taken the step of formally banning ChatGPT use not only by government employees, but by all Italians.

As reported by the BBC, "the Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. The regulator said it would ban and investigate OpenAI 'with immediate effect'."

While I believe this concern is partly rooted in a misunderstanding of how ChatGPT operates (it is a pre-trained AI that doesn't directly integrate or learn from the prompts and content entered into it), OpenAI does broadly review this submitted data to improve the AI's responses, and that is enough of a concern for a regulator to want to explore further.

Certainly I would not advise entering private, confidential or classified content into ChatGPT, but except in very specific cases there's little to no privacy risk of your data being reused or somehow repurposed in nefarious ways.

In contrast, the Singaporean government has built a tool using ChatGPT's API to give 90,000 public servants a 'Pair' in Microsoft Word and other applications, which they can use to accelerate writing tasks. The government has a formal agreement with OpenAI not to use any prompt data in future AI training.


What Italy's decision does herald is that nations should begin considering where their line is for AIs. While most of the current generation of large language models are pre-trained, meaning prompts from humans don't become part of their knowledge base, the next generation may include more capability for continuous finetuning, where information can continually be ingested by these AIs to keep improving their performance.

Specific finetuning is available now for certain AIs, such as OpenAI's GPT-3 and AI21's Jurassic, which allows an organisation to finetune the AI to 'weight it' towards delivering better results for its knowledge set or specific goals. 
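As a rough sketch of what preparing such a finetune looks like (the field names follow the prompt/completion JSONL convention used by GPT-3-era fine-tuning; the examples and file name here are hypothetical), an organisation assembles question-and-answer pairs from its own knowledge set and uploads them as training data:

```python
import json

# Hypothetical training examples drawn from an organisation's own
# knowledge set, in the prompt/completion JSONL format used by
# GPT-3-era fine-tuning APIs.
examples = [
    {"prompt": "Summarise the objects of the Privacy Act:",
     "completion": " The Act promotes the protection of individuals' privacy..."},
    {"prompt": "Which Act governs telecommunications interception?",
     "completion": " The Telecommunications (Interception and Access) Act 1979."},
]

# Write one JSON object per line; this file would then be uploaded
# to the provider's fine-tuning endpoint to 'weight' the base model
# towards the organisation's domain.
with open("finetune_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

The point of the format is that finetuning doesn't retrain the model from scratch; it nudges an existing pre-trained model towards the patterns in these pairs.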

In government terms, this could mean training an AI on all of Australia's legislation to make it better able to review and write new laws, or on all the public or government-owned research on a given topic to support policy development processes.

It makes sense for governments to proactively understand the current and projected trajectory of AI (particularly generative AI) and set some policy lines to guide the response if they occur.

This would help industry develop within a safe envelope rather than exploring avenues which governments believe would create problems for society.

The ease with which bias creeps into AI

Removing bias is a critical component in the design and training of AI models; however, despite the care taken, it can be incredibly easy for bias to creep in.

This is because we often don't see our own biases and, even at a macro level as a species, we may hold biases we are not aware of that fundamentally impact how our AIs perform.

A great example I came across the other week was in image-generation AIs, when asked to create a selfie of a group of people. There is a tendency for AI models to portray a smiling group, whatever the era or cultural background of the subjects.

This reveals a bias: many groups throughout history traditionally didn't smile in photos, but smiling has largely become the Western norm, so the training sets these AIs use are biased towards smiling, regardless of the culture or time period of the people represented.

[AI-generated image: selfie of French WW1 soldiers]

TikTok banned on Australian government devices

The Australian government has finally banned the installation and use of TikTok on government devices.

This has been a long time coming, with a number of other nations having already banned the use of TikTok on government devices.

With TikTok owned by a Chinese company with very close ties to the Chinese government, this has long been an area of concern for me, and I'm glad the government has finally made this decision.

While the vast majority of social networking tools used by Australians are owned overseas, most are domiciled in nations with long-term alignments with Australian interests, such as the USA. While there's a similar risk that the US government could, through legislation, access information that Australia's public servants put on these networks, or weight what our government officials see when they use these platforms, this risk is generally considered low because of those alignments.

With TikTok this alignment was far weaker. Chinese hackers are constantly launching cyberattacks against Australian government agencies and companies, seeking commercial and political advantage.

As such, the risk and consequences of an Australian public servant being compromised or hacked, and information from government systems exposed, were significantly greater with TikTok than other services.