
Wednesday, April 29, 2026

Applying the APS Style Manual as a rules engine

I spoke at the IBR Gen AI Transforming Govt PA & Comms conference last week. 


The event handed out copies of the Government Writing Handbook - the APS Style Manual in distilled form - to everyone in the room.

My immediate thought was: Is this available as a GenAI skill yet? Or as a RAG knowledge base? Or in any form a government agency's chosen GenAI language model can comprehensively use as a checkpoint during document generation?

It wasn't. Paper and PDF only. With a blog and a few web pages replicating part of the manual.

However, I did find the Writing style guide for the Singapore Government Design System (SGDS) as a Skill... 

And here's a PDF of the demo, in case your agency can't access Manus.

So I ran a quick test in the room during the 30 minutes before I spoke. Being mindful of the Government Copyright (which wasn't Creative Commons), I extracted a handful of rules from the Writing Handbook, wrapped them in a prompt, and ran a quickly drafted press release through Claude.

The output tightened immediately. Shorter sentences. Active voice. The point surfaced early. It read like something that would get through clearance without being rewritten three times.

Nothing about the model changed. The constraint did the work.

I've written about Rules as Code on this blog before. The argument has always been the same: take policy, legislation and guidance and express it in a form that systems can apply consistently. We've seen this in eligibility engines, compliance checking, and service delivery. The benefits include consistency, transparency and less reliance on individual interpretation under pressure.

This is the same pattern. Applied to government writing.

The Style Manual is a set of rules that every federal public servant should live by. 

Right now, they're expressed as prose, examples and guidance. People interpret and apply them as best they can. Results vary - depending on the writer, the reviewers, the deadline, and how recently they all last read the manual.

However, translate those writing rules into a form a system can execute, and you get consistency at the point of creation.

The test in the room did exactly that. I didn't attempt to ingest the whole manual. Just a handful of rules — plain language, active voice, short sentences, clear structure — enforced as a second pass over the draft.

Python
# `llm` stands in for the agency's chosen model client (Claude, in the test I ran).
def draft(prompt):
    return llm.generate(prompt)

def enforce_style(text):
    return llm.generate(f"""
Rewrite this text to comply with APS writing principles:
- use plain language
- prefer active voice
- keep sentences concise
- make the purpose clear early

Text:
{text}
""")

Rules as code in its simplest form. The rules are explicit. The system applies them. The output is predictable.

To make it useful at scale, you could structure the manual itself - each rule becomes something the system can retrieve and apply based on context rather than running every rule on every document.

JSON
{
  "rule": "Use plain language",
  "check": "Identify complex terms",
  "rewrite": "Replace with simpler words",
  "tags": ["web", "email", "brief"]
}

Store those rules. Tag them. Retrieve the right ones based on the task.

Writing a brief? Apply structure and clarity rules. 
Writing web content? Apply plain language and accessibility. 
Writing an email? Apply directness and action orientation.
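That selection step can be sketched in a few lines. This is a minimal illustration, not a real ruleset: the rules and the tags attached to them are my own placeholders, standing in for a structured version of the Style Manual.

```python
# A minimal sketch of tag-based rule selection. The rules and tags below
# are illustrative placeholders, not drawn from the Style Manual itself.
RULES = [
    {"rule": "Use plain language", "tags": ["web", "email", "brief"]},
    {"rule": "Prefer active voice", "tags": ["web", "email", "brief"]},
    {"rule": "State the purpose early", "tags": ["brief", "email"]},
    {"rule": "Meet accessibility standards", "tags": ["web"]},
    {"rule": "End with a clear action", "tags": ["email"]},
]

def rules_for(doc_type):
    """Return only the rules tagged for this document type."""
    return [r["rule"] for r in RULES if doc_type in r["tags"]]
```

Calling `rules_for("web")` returns the plain language, active voice and accessibility rules; an email draft gets directness and action rules instead.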

That's a rules engine. The underlying pattern is the same one government has used for years in eligibility and compliance systems.

Wrapped into tools like Microsoft Copilot - which is now rolling into agency workflows - this becomes part of the drafting workflow. 

The user requests a particular type of content (with appropriate context and inputs). The system generates it, applies the relevant rules, and returns something already aligned to APS expectations. 
From the user's perspective, nothing special is happening, but editing for style is far easier and faster.
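The two-pass workflow described above - generate, then enforce the selected rules - can be sketched as a single function. Again `llm` is a placeholder for whatever model client the agency uses; the function name and prompt wording are my own assumptions.

```python
# Sketch of the generate-then-check workflow. `llm` is any callable that
# takes a prompt string and returns text (a Copilot or Claude API wrapper,
# for instance); the names here are illustrative.
def generate_with_style(llm, request, rules):
    draft = llm(request)  # first pass: generate the content
    checklist = "\n".join(f"- {r}" for r in rules)
    # second pass: rewrite the draft against the selected rules
    return llm(
        f"Rewrite this text to comply with these APS writing rules:\n"
        f"{checklist}\n\nText:\n{draft}"
    )
```

The user sees only the final output; the style pass happens invisibly between request and response.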

There's one practical constraint. The Style Manual isn't available under a Creative Commons license, but under Government Copyright 2026. That limits copying and redistribution of the work, not the extraction and structured implementation of its rules.

The source remains authoritative. The system applies a structured interpretation of it. This is, again, exactly what we already do with legislation and policies. We don't expect staff to memorise every clause. We encode the rules and apply them consistently.

Every agency could do this with their Copilot instance - or Finance could implement it into GovAI centrally and share the ruleset as a skill.md file or a RAG (Retrieval-Augmented Generation) knowledge base - essentially a permanent, updateable reference that generative AIs can draw on.
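For the RAG version, the mechanics are: store the rules, score them against the task at hand, and retrieve the best matches. The toy below uses keyword overlap purely to show the retrieve-then-apply pattern - a real deployment would use an embedding model and a vector store, and the ruleset shown is invented for illustration.

```python
# Toy retrieval sketch: score each stored rule against the task description
# by keyword overlap. A production RAG setup would use embeddings and a
# vector store; the ruleset below is an illustrative placeholder.
RULESET = {
    "Use plain language": "replace complex jargon terms with simpler words",
    "Prefer active voice": "rewrite passive sentences in active voice",
    "Keep sentences concise": "split long sentences shorten wordy phrasing",
}

def retrieve(task, k=2):
    """Return up to k rules whose descriptions overlap the task wording."""
    task_words = set(task.lower().split())
    scored = [
        (len(task_words & set(desc.split())), rule)
        for rule, desc in RULESET.items()
    ]
    return [rule for score, rule in sorted(scored, reverse=True)[:k] if score > 0]
```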

Suddenly everyone using Copilot within your agency automatically applies the APS Style Manual in every generation - or it can be applied selectively, based on what they are seeking to generate, or via a user setting or prompt.

The result is that the APS Style Manual stops being guidance people try to remember and becomes a checkpoint that authorised generative AI systems apply whenever it's needed - cutting editing and review time.

Plus staff can write their content by hand and have AI check the style, editing and rewriting where necessary to better meet government writing standards.

Is this a perfect solution? Probably not - yet. AIs still make mistakes and can misapply or fail to apply some style rules. However, it improves a basic Copilot or other GenAI solution for government writing purposes, saving time and raising text quality and productivity.


Tuesday, April 04, 2023

Italy bans ChatGPT (over privacy concerns)

In the first major action by a nation to limit the spread and use of generative AI, Italy's government has formally banned ChatGPT use - not only by government employees, but by all Italians.

As reported by the BBC, "the Italian data-protection authority said there were privacy concerns relating to the model, which was created by US start-up OpenAI and is backed by Microsoft. The regulator said it would ban and investigate OpenAI 'with immediate effect'."

While I believe this concern is rooted in a misunderstanding of how ChatGPT operates - it is a pre-trained AI that doesn't directly learn from the prompts and content entered into it - OpenAI does broadly review this submitted data to improve the AI's responses, which is enough of a concern for a regulator to want to explore it further.

Certainly I would not advise entering private, confidential or classified content into ChatGPT, but except in very specific cases there's little to no privacy risk of your data being reused or repurposed in nefarious ways.

In contrast, the Singaporean government has built a tool using ChatGPT's API to give 90,000 public servants a 'Pair' in Microsoft Word and other applications they can use to accelerate writing tasks. The government has a formal agreement with OpenAI not to use any prompt data in future AI training.


What Italy's decision does herald is that nations should begin considering where their line is for AIs. While most of the current generation of large language models are pre-trained, meaning prompts from humans don't become part of their knowledge base, the next generation may include more capability for continuous finetuning, where new information is ingested over time to keep improving their performance.

Specific finetuning is available now for certain AIs, such as OpenAI's GPT-3 and AI21's Jurassic, which allows an organisation to finetune the AI to 'weight it' towards delivering better results for their knowledge set or specific goals.

In government terms, this could mean training an AI on all of Australia's legislation to make it better able to review and write new laws, or on all the publicly available or government-owned research on a given topic to support policy development processes.

It makes sense for governments to proactively understand the current and projected trajectory of AI (particularly generative AI) and set some policy lines in advance to guide the response as these capabilities emerge.

This would help industry develop within a safe envelope rather than exploring avenues which governments believe would create problems for society.

