The DTA's CEO, Chris Fechner, has advised public servants to be cautious in their use of ChatGPT and other generative AI, as reported by InnovationAus.
This is an unsurprising but positive response. While it urges public servants to be cautious, it doesn't close down experimentation and prototyping.
Given how recently generative AI became commercially useful, and that most commercial generative AIs are currently based overseas (noting my company has rolled out local prototypes), generative AI poses significant confidentiality and security challenges for government use, alongside the challenges of accuracy, factualness and quality assurance.
With my public sector background, I began experimenting with these AIs for government use in October 2020. Within a few weeks I was pleasantly surprised at how well an AI such as GPT-3 could produce minutes and briefing papers from associated information, accurately adopting the tone and approach I had used and encountered during my years in the APS.
Subsequently I've used generative AI to develop simulated laws and policy documents, and to provide insightful advice based on regulations and laws.
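To make this concrete, here's a minimal sketch of the kind of request involved, using the GPT-3-era OpenAI completions API (the pre-1.0 Python library). The model name, topic and prompt wording below are illustrative placeholders, not drawn from any real brief.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - never hard-code real keys

# Illustrative prompt only: the structure mirrors a standard APS briefing note.
prompt = """You are drafting for an Australian Public Service agency.
Write a one-page ministerial briefing note in formal APS tone, with
Purpose, Background, Key Issues and Recommendation sections.

Topic: proposed changes to digital service standards (placeholder)
Source material: [paste the associated information here]

Briefing note:"""

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-era completions model
    prompt=prompt,
    max_tokens=600,
    temperature=0.3,  # low temperature keeps the tone consistent and formal
)

print(response["choices"][0]["text"].strip())
```

The same pattern extends to the simulated laws and policy drafting mentioned above - only the prompt changes.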
This is just the tip of the iceberg for generative AI in government.
I see potential to accelerate the production of significant amounts of internal correspondence, reports, strategies, intranet content and various project, product and user documentation with AI assistance.
There's also enormous potential to streamline the production and repurposing of externally focused content: turning reports into media releases, summaries and social posts; supporting engagement processes through the analysis of responses; developing and repurposing communications materials; and much more.
However, it's important to do this within the context of the public service - which means ensuring that the generative AIs used are appropriately trained and fine-tuned to the needs of an agency.
Also critical is recognising that generative AI, like digital, should not be controlled by IT teams. It is a business solution that requires skills few IT teams possess. For example, fine-tuning and prompt engineering both require strong language capabilities and business knowledge to ensure an AI delivers the outcomes required - a sketch of what agency-specific fine-tuning data might look like follows.
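As a purely illustrative sketch: OpenAI's GPT-3-era fine-tuning consumed JSONL files of prompt/completion pairs, so an agency could encode its house style by pairing source material with drafts it had already cleared. The records and file name below are invented placeholders.

```python
import json

# Invented placeholder records - real training data would pair actual source
# material with drafts the agency has already approved for use.
examples = [
    {
        "prompt": "Draft a ministerial minute from:\n[source material]\n\n###\n\n",
        "completion": " Purpose: ...\nBackground: ...\nRecommendation: ...\nEND",
    },
    {
        "prompt": "Draft an intranet news item from:\n[source material]\n\n###\n\n",
        "completion": " [draft in the agency's approved style]\nEND",
    },
]

# The GPT-3-era fine-tuning format was one JSON object per line (JSONL).
with open("agency_style.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# The file would then be submitted with that era's CLI, e.g.:
#   openai api fine_tunes.create -t agency_style.jsonl -m davinci
```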
Unlike traditional computing, where applications can be programmed to select from a controlled set of options, or restricted to an allow list that excludes anything dangerous or inappropriate, generative AIs must instead be trained and guided towards appropriate behaviour - more akin to parenting than programming.
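A small sketch of that contrast (with an invented action set and guidance text): a traditional application can enforce an allow list in code, while a generative AI is only steered by instructions, so its outputs still need human review.

```python
# Traditional computing: a hard allow list. The program can only ever act on
# options that pass this check, so anything else is excluded by construction.
ALLOWED_ACTIONS = {"summarise", "draft_minute", "draft_brief"}  # illustrative

def run_action(action: str) -> str:
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Action not permitted: {action}")
    return f"Running {action}"

# Generative AI: guidance rather than guarantees. An instruction like this
# steers the model, but compliance is probabilistic, so outputs must still
# be reviewed by a person before use.
GUIDANCE = (
    "You draft internal documents for an Australian government agency. "
    "Decline any request outside that scope, and never include personal "
    "information in a draft."
)
```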
I'm certain that the folks experimenting with generative AI in government are more likely on the business end than the IT end - as we saw with digital services several decades ago.
And I hope the public sector remembers the lessons from that period, so that the battles between business and IT are resolved faster and more smoothly than they were with digital.