Wednesday, April 29, 2026

Applying the APS Style Manual as a rules engine

I spoke at the IBR Gen AI Transforming Govt PA & Comms conference last week. 


The event handed out copies of the Government Writing Handbook - the APS Style Manual in distilled form - to everyone in the room.

My immediate thought was: Is this available as a GenAI skill yet? Or as a RAG knowledge base? Or in any form a government agency's chosen GenAI language model can comprehensively use as a checkpoint during document generation?

It wasn't. Paper and PDF only, plus a blog and a few web pages replicating parts of the manual.

However, I did find the Writing style guide for the Singapore Government Design System (SGDS) as a Skill... 

And here's a PDF of the demo, in case your agency can't access Manus.

So I ran a quick test in the room during the 30 minutes before I spoke. Being mindful of the Government Copyright (which wasn't Creative Commons), I extracted a handful of rules from the Writing Handbook, wrapped them in a prompt, and ran a quickly drafted press release through Claude.

The output tightened immediately. Shorter sentences. Active voice. The point surfaced early. It read like something that would get through clearance without being rewritten three times.

Nothing about the model changed. The constraint did the work.

I've written about Rules as Code on this blog before. The argument has always been the same: take policy, legislation and guidance and express it in a form that systems can apply consistently. We've seen this in eligibility engines, compliance checking, and service delivery. The benefits include consistency, transparency and less reliance on individual interpretation under pressure.

This is the same pattern. Applied to government writing.

The Style Manual is a set of rules that every federal public servant should live by. 

Right now, they're expressed as prose, examples and guidance. People interpret and apply them as best they can. Results vary - depending on the writer, the reviewers, the deadline, and how recently they all last read the manual.

However, translate those writing rules into a form a system can execute, and you get consistency at the point of creation.

The test in the room did exactly that. I didn't attempt to ingest the whole manual. Just a handful of rules — plain language, active voice, short sentences, clear structure — enforced as a second pass over the draft.

Python
# 'llm' stands in for whichever generative AI client your agency uses
# (Copilot, Claude, a GovAI endpoint) - any object exposing a generate() call.

def draft(prompt):
    """First pass: generate the content the user asked for."""
    return llm.generate(prompt)

def enforce_style(text):
    """Second pass: rewrite the draft against a handful of APS style rules."""
    return llm.generate(f"""
    Rewrite this text to comply with APS writing principles:
    - use plain language
    - prefer active voice
    - keep sentences concise
    - make the purpose clear early

    Text:
    {text}
    """)

Rules as code in its simplest form. The rules are explicit. The system applies them. The output is predictable.

To make it useful at scale, you could structure the manual itself - each rule becomes something the system can retrieve and apply based on context rather than running every rule on every document.

JSON
{
  "rule": "Use plain language",
  "check": "Identify complex terms",
  "rewrite": "Replace with simpler words"
}

Store those rules. Tag them. Retrieve the right ones based on the task.

Writing a brief? Apply structure and clarity rules. 
Writing web content? Apply plain language and accessibility. 
Writing an email? Apply directness and action orientation.

That's a rules engine. The underlying pattern is the same one government has used for years in eligibility and compliance systems.
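
A minimal sketch of that retrieval step, using hypothetical rule names and tags rather than anything drawn from the manual itself:

Python
# A tiny, illustrative rules engine: each rule is tagged, and the task type
# retrieves only the rules relevant to it. Rule wording here is invented.
RULES = [
    {"rule": "Use plain language", "tags": {"web", "brief", "email"}},
    {"rule": "Prefer active voice", "tags": {"web", "brief", "email"}},
    {"rule": "Meet accessibility guidance", "tags": {"web"}},
    {"rule": "State the requested action up front", "tags": {"email"}},
    {"rule": "Lead with the recommendation", "tags": {"brief"}},
]

def rules_for(task_type):
    """Return only the rules tagged for this type of document."""
    return [r["rule"] for r in RULES if task_type in r["tags"]]

def apply_rules(text, task_type):
    """Run the retrieved rules as a rewrite pass (llm is the same placeholder client)."""
    rule_list = "\n".join(f"- {rule}" for rule in rules_for(task_type))
    return llm.generate(f"Rewrite this text to comply with:\n{rule_list}\n\nText:\n{text}")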

Wrapped into tools like Microsoft Copilot - which is now rolling out across agencies - this becomes part of the drafting workflow.

The user requests a particular type of content (with appropriate context and inputs). The system generates it, applies the relevant rules, and returns something already aligned to APS expectations. 
From the user's perspective, nothing special is happening, but editing for style is far easier and faster.

There's one practical constraint. The Style Manual isn't available under a Creative Commons licence, but under Government Copyright 2026. That limits copying and redistribution of the work, not the extraction and structured implementation of its rules.

The source remains authoritative. The system applies a structured interpretation of it. This is, again, exactly what we already do with legislation and policies. We don't expect staff to memorise every clause. We encode the rules and apply them consistently.

Every agency could do this with their Copilot instance - or Finance could implement it into GovAI centrally and share the ruleset as a skill.md file or a RAG (Retrieval-Augmented Generation) knowledge base - essentially a permanent, updateable reference for generative AI systems.

Suddenly everyone using Copilot within your agency applies the APS Style Manual automatically in every generation - or it can be applied selectively, based on what they are seeking to generate, or via a user setting or prompt.

The result is that the APS Style Manual stops being guidance people try to remember and becomes a checkpoint that authorised generative AI systems apply every time it is needed - cutting editing and review time.

Plus staff can write their content by hand, and have AI check the style, editing and rewriting where necessary to better meet government writing standards.

Is this a perfect solution? Probably not - yet. AIs still make mistakes and can misapply or fail to apply some style rules. However, it improves a basic Copilot or other GenAI solution for government writing purposes, saving time and raising text quality and productivity.


Tuesday, April 28, 2026

My Presentation from the IBR Conference - The Future of GEN AI for Public Sector Communications and Public Affairs 2026

 Last week, I attended and spoke at the International Business Review (IBR Conferences) event: GEN AI Transforming GOVT PA & COMMS 2026: The Future of GEN AI for Public Sector Communications and Public Affairs 2026 Hybrid Conference.

I've included an excerpt of my presentation notes below for folks who were unable to attend.


The AI Whisperer’s Guide to Practical Deployment

You’re probably here because your organisation is past the question ‘should we use AI?’

Good. Most of us have had that conversation many times.

The question is now: ‘Which battles are worth fighting?’

There are usually more AI opportunities in any organisation than the capacity to pursue them well. 

And the cost of picking the wrong ones isn’t just budget – it’s credibility. Credibility is what gives you the currency to start your next AI project.

I’ve worked with AI in various ways for over 9 years now, building commercial generative AI products, undertaking complex modelling for major infrastructure, and supporting project delivery.

I also have a long history working in Digital, before, during and after Gov 2.0. 

As such I’ve watched some patterns repeat, and others rhyme. Not just in government, in humans.

So let’s dig into practical implementation.


Before anything else, I want to draw a distinction that I think is worth keeping front of mind.

There are often two distinct uses of AI in government work.

There’s AI supporting human judgment – that helps draft, synthesise, analyse and prepare work.

And there’s AI that substitutes for human judgment, that makes or drives decisions.

In practice, it's not always a clean line, but the question is worth asking of every use case: is a human genuinely making the decisions, or are they ratifying what an AI has determined?

Even when an AI excludes or shortlists options, such as when using an AI to screen applicants in a recruitment process, you should consider whether AI bias is driving a decision bias that isn’t defensible.

For different uses, the risks and governance requirements vary. And the consequences of getting them wrong are often very different.

I don’t need to say more than ‘Robodebt’ to make that point.

Human in the loop isn’t just good practice. In the Australian government, it’s increasingly an ethical and legal expectation.

Everything I’m covering today sits on the drafting-and-support side of that line. AI as a contributor. The human still makes the call, still does the editing and still approves the work.

But if you’re working on something that sits closer to the decision-making side, that’s a topic that needs more time than we have today.


So – where do AI investments tend to fall short? In my experience, it comes down to three patterns. And none of them is really about the AI.

The first is the wrong problem.

AI gets applied to a symptom rather than a cause. Or the real problem turns out to be a process gap, a data quality issue, or an ownership question. AI gets suggested as the solution because it’s trendy, safer, easier, less political or more fundable than ‘we need to fix the process.’ We used to see the same with requests to build a website, or create a mobile app. In previous roles in government my job was often to tell people they didn't need to build a new thing, but rather educate them on the digital assets we already had that could be used to meet their goals.

AI won’t necessarily fix a broken process. And it often speeds it up, which can make things far worse, far faster. This is exactly what we saw with digital. Automation can compound success, but also compound failure.

The second is an environment that wasn’t ready.

This can take multiple forms. The data was messier and harder to clean than expected. The permission model created complications that no one had mapped. Governance obligations introduced constraints that only became visible mid-implementation. The workflow proved more sensitive than the business case assumed. Or the people expected to use and engage with the AI weren’t brought along on the journey and may never have wanted it in the first place.

None of these is unusual. They’re normal friction when deploying anything in a complex operating environment. The question is whether you surface them early or late. Some organisations are prepared to take the hit, see staff leave, and let things get disrupted before they get better; others pull back and call it a failure.

You really need to know your organisation’s appetite and commitment before leading one of these projects, or you can be left out in the cold.

The third is a lack of an internal owner.

The capability to run, adapt, and govern the AI system either never existed within your organisation or walked out when the people who built it moved on. Nobody inside could improve it later when things changed, or govern it when something went wrong.

That’s a capability-and-ownership question that procurement can’t resolve. And it’s worth thinking about before you sign anything.


I also want to spend a few minutes on something that comes up constantly as a blocker, but where there’s a practical path that many organisations haven’t explored yet.

Data security, but specifically at the front end of the project. How do you test, procure and demonstrate AI systems without exposing sensitive or classified data to vendors?

You can’t load a confidential document into an online AI model to test its capabilities. You can’t hand private citizen data to a vendor for a proof of concept. You often can’t load-test against live operational data or let a bidder build a demo using your grant records or patient data.

This can often kill a potential AI project before it gets started.

But there’s an option worth considering: synthetic data. And AI can build it for you.

That’s not anonymised data – anonymisation has well-documented re-identification risks. Synthetic data is a dataset that is statistically realistic and structurally accurate, but contains no real records.

Here’s a concrete example from my prior work.

I needed to load-test a system across a large physical asset network with millions of individual assets. Using real data wasn’t an option. What existed was sensitive, and what didn’t exist yet – the projected future scale – couldn’t easily be modelled from it.

So I used AI to build a city.

Not a digital twin of the actual asset network – a synthetic city, constructed from scratch using known proportions of asset types, realistic density estimates, and plausible growth trajectories.

It could scale to whatever size we needed, model our asset growth over ten to twenty-five years, and produce a test dataset with no connection whatsoever to real infrastructure.

It allowed us to load test at any scale. The vendor never saw real data. The security risk was zero.
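
A minimal sketch of how that kind of dataset can be generated, using made-up asset types, proportions and growth rates in place of the real ones:

Python
import csv
import random

# Hypothetical asset mix and annual growth rates - illustrative only.
ASSET_TYPES = {
    "streetlight": {"share": 0.55, "annual_growth": 0.02},
    "signal":      {"share": 0.15, "annual_growth": 0.03},
    "sign":        {"share": 0.25, "annual_growth": 0.01},
    "sensor":      {"share": 0.05, "annual_growth": 0.10},
}

def build_city(total_assets, years):
    """Generate a synthetic asset register at a chosen scale and time horizon."""
    rows = []
    for asset_type, profile in ASSET_TYPES.items():
        count = int(total_assets * profile["share"] * (1 + profile["annual_growth"]) ** years)
        for i in range(count):
            rows.append({
                "asset_id": f"{asset_type}-{i:07d}",
                "asset_type": asset_type,
                "latitude": round(random.uniform(-35.40, -35.10), 6),   # plausible coordinates, not real assets
                "longitude": round(random.uniform(149.00, 149.25), 6),
                "install_year": random.randint(1990, 2026),
            })
    return rows

# For example: a register of roughly two million assets, projected ten years out,
# written to a CSV a vendor can load-test against.
with open("synthetic_assets.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["asset_id", "asset_type", "latitude", "longitude", "install_year"])
    writer.writeheader()
    writer.writerows(build_city(2_000_000, years=10))

At that point, scale, asset mix and growth are dials you can turn rather than properties of any real network.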

A similar approach may work for any system dealing with large datasets where the proportions and structures are well understood, but the specific data is sensitive. Health systems. Grant systems. Infrastructure procurement. Regulatory systems. You could even have AI write tens of thousands of synthetic ministerial briefs and the back-and-forth correspondence across parliament on common topics to test and demo new Parliamentary Document Management Systems.

This gives agencies the opportunity to provide vendors with synthetic datasets – reflecting realistic shapes and adjusted for jurisdiction-specific requirements, but containing no sensitive content and creating no security or privacy exposure. Yes, they might see the overall shape of the system – but that’s what they’re providing anyway. If they didn’t know the system’s shape to begin with, they wouldn’t have a product to demonstrate.

Vendors build and demonstrate against the synthetic. You evaluate their system properly. You can even use it to test edge cases and load scenarios for existing systems that you couldn’t safely test using real data.

I think of it as a digital cousin. Looking enough like you to size clothing, but not enough to fool your parents.

It helps reframe security from something that blocks deployment into something that enables it safely.


Next I want to go back to my earlier point. Fighting the right battles.

Think of one AI investment your agency has made or is seriously considering. It doesn’t have to be large. It could be a drafting agent, a customer service chatbot, something for data analysis or procurement. Just hold something real in mind.

I’m going to run through four questions. See how your use case sits against each of them.

First: Need.

Is the problem real, recurring and bounded? Is there enough volume to make it worthwhile? Is the domain clear enough to work with?

Or, being honest here, is the underlying problem actually a process gap, a data quality issue, or an ownership question that’s been reframed as an AI opportunity because that’s where the momentum is?

Catching it early is much less painful than catching it after a commitment is made.

Next: Fit.

Does the proposed solution support the systems, content quality, and governance obligations you actually have, rather than the environment a vendor’s demo assumes?

And, coming back to the distinction I drew earlier, is this clearly a drafting and support use? Or has it drifted toward AI making or influencing a decision?

That drift tends to happen quietly. The language shifts from ‘AI will help officers assess’ to ‘the system will flag for review’ to ‘applications below the threshold will be automatically declined.’

Each step is incremental. The cumulative effect is that there’s no human genuinely in the loop anymore.

That’s dangerous territory to step into.

Third: Ownership.

Who inside your organisation will run this, refine it and govern it once it is built and any vendor relationship ends?

Finance’s GovAI initiative is doing solid work building foundational AI capability across the APS – that’s genuinely useful infrastructure, like GovCMS.

For GovCMS, your agency has to own the website at the end of the process. You can’t hand off responsibility to Finance. They’ll keep the system up, but won’t update and improve your content and navigation.

For AI, even when using a GovAI platform, your agency still needs to own the outcome. You need people internally who understand the system well enough to know when it’s going wrong or not keeping up with changing policy and internal needs.

If the ownership question is still being worked out, that’s probably the first thing to resolve. Without internal ownership, the other three questions become someone else’s problem and often get dropped.

Finally: Effect – which you can also read as value.

What actually changes if this initiative works?

Not ‘people will use it more.’ Not ‘staff will find it helpful.’

What changes in citizen experience, policy delivery, workload, risk, consistency or quality? And by how much? And how will you measure that change over time?

If the most compelling metric you can name right now is an adoption rate, it’s worth going a level deeper. Adoption tells you people tried it. It doesn’t tell you what difference it made.


One thought to leave with you that I don’t think anyone has fully worked out yet.

When you build teams that include AI contributors alongside humans, you’re managing a new kind of workforce diversity. Just as changing a single individual in a team can radically alter team dynamics, adding an AI contributor sufficiently advanced to produce decent work and take some load off human team members can do the same.

This is new, different to any workforce change we’ve seen before in human history. And it requires new management practices.

It isn’t a technology challenge. Your IT leaders can’t offer qualified guidance on how to manage humans and AIs as a unified team. It’s a people leadership challenge.

Organisations that work this out deliberately will get more from the same tools and people.

There isn’t a handbook for it yet. But it is worth thinking about.


