Wednesday, April 29, 2026

Applying the APS Style Manual as a rules engine

I spoke at the IBR Gen AI Transforming Govt PA & Comms conference last week. 

The event handed out copies of the Government Writing Handbook - the APS Style Manual in distilled form - to everyone in the room.

My immediate thought was: Is this available as a GenAI skill yet? Or as a RAG knowledge base? Or in any form a government agency's chosen GenAI language model can comprehensively use as a checkpoint during document generation?

It wasn't. Paper and PDF only, plus a blog and a few web pages replicating parts of the manual.

So I ran a quick test in the room during the 30 minutes before I spoke. Being mindful of the Government Copyright (which wasn't Creative Commons), I extracted a handful of rules from the Writing Handbook, wrapped them in a prompt, and ran a quickly drafted press release through Claude.

The output tightened immediately. Shorter sentences. Active voice. The point surfaced early. It read like something that would get through clearance without being rewritten three times.

Nothing about the model changed. The constraint did the work.

I've written about Rules as Code on this blog before. The argument has always been the same: take policy, legislation and guidance and express it in a form that systems can apply consistently. We've seen this in eligibility engines, compliance checking, and service delivery. The benefits include consistency, transparency and less reliance on individual interpretation under pressure.

This is the same pattern. Applied to government writing.

The Style Manual is a set of rules that every federal public servant should live by. 

Right now, they're expressed as prose, examples and guidance. People interpret and apply them as best they can. Results vary - depending on the writer, the reviewers, the deadline, and how recently they all last read the manual.

However, translate those writing rules into a form a system can execute, and you get consistency at the point of creation.

The test in the room did exactly that. I didn't attempt to ingest the whole manual. Just a handful of rules — plain language, active voice, short sentences, clear structure — enforced as a second pass over the draft.

Python

# 'llm' stands in for whichever model client the agency uses (Claude, Copilot, etc.)
def draft(prompt):
    return llm.generate(prompt)

def enforce_style(text):
    # Second pass: rewrite the draft against explicit APS writing rules
    return llm.generate(f"""
    Rewrite this text to comply with APS writing principles:
    - use plain language
    - prefer active voice
    - keep sentences concise
    - make the purpose clear early

    Text:
    {text}
    """)

Rules as code in its simplest form. The rules are explicit. The system applies them. The output is predictable.

To make it useful at scale, you could structure the manual itself - each rule becomes something the system can retrieve and apply based on context rather than running every rule on every document.

JSON
{
  "rule": "Use plain language",
  "check": "Identify complex terms",
  "rewrite": "Replace with simpler words",
  "tags": ["web-content", "brief", "email"]
}

Store those rules. Tag them. Retrieve the right ones based on the task.

Writing a brief? Apply structure and clarity rules. 
Writing web content? Apply plain language and accessibility. 
Writing an email? Apply directness and action orientation.

That's a rules engine. The underlying pattern is the same one government has used for years in eligibility and compliance systems.
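As a rough sketch of that retrieval step (the rule text and tags here are illustrative, not drawn from the Style Manual), selecting rules by task can start as a simple filter over a tagged store:

```python
# Minimal tagged rule store; rule text and tags are illustrative only
RULES = [
    {"rule": "Use plain language", "tags": {"web", "brief", "email"}},
    {"rule": "Prefer active voice", "tags": {"web", "brief", "email"}},
    {"rule": "State the requested action up front", "tags": {"email"}},
    {"rule": "Follow accessibility guidance", "tags": {"web"}},
]

def rules_for(task: str) -> list[str]:
    """Return the rules tagged as relevant to a task type."""
    return [r["rule"] for r in RULES if task in r["tags"]]

print(rules_for("email"))
# ['Use plain language', 'Prefer active voice', 'State the requested action up front']
```

A production version would swap the list for a retrievable knowledge base, but the pattern is the same: explicit rules, selected by context, applied consistently.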

Wrapped into tools like Microsoft Copilot - which is now rolling out across agency workflows - this becomes part of the drafting process. 

The user requests a particular type of content (with appropriate context and inputs). The system generates it, applies the relevant rules, and returns something already aligned to APS expectations. From the user's perspective, nothing special is happening, but editing for style is far easier and faster.
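That generate-then-enforce flow can be sketched end to end. Everything here is hypothetical: StubLLM stands in for a real model client, and the task-to-rules mapping is illustrative, not from the manual:

```python
# Hypothetical end-to-end flow: generate, select rules by task, enforce as a second pass.
class StubLLM:
    """Stand-in for a real model client (Copilot, Claude, etc.)."""
    def generate(self, prompt: str) -> str:
        return f"[model output for {len(prompt)}-char prompt]"

llm = StubLLM()

# Illustrative mapping of task types to rule subsets
TASK_RULES = {
    "email": ["use plain language", "state the requested action up front"],
    "web": ["use plain language", "meet accessibility guidance"],
}

def draft_with_style(task: str, prompt: str) -> str:
    draft = llm.generate(prompt)                           # 1. generate the draft
    rules = "\n".join(f"- {r}" for r in TASK_RULES[task])  # 2. retrieve relevant rules
    return llm.generate(                                   # 3. rewrite against them
        f"Rewrite this text to comply with these APS rules:\n{rules}\n\nText:\n{draft}"
    )
```

In a real deployment the second call would carry the full rule text for the task, and the stub would be the agency's approved model endpoint.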

There's one practical constraint. The Style Manual isn't available under a Creative Commons licence, but under Government Copyright 2026. That limits copying and redistribution of the work, but not the extraction and structured implementation of its rules.

The source remains authoritative. The system applies a structured interpretation of it. This is, again, exactly what we already do with legislation and policies. We don't expect staff to memorise every clause. We encode the rules and apply them consistently.

Every agency could do this with their Copilot instance - or Finance could implement it into GovAI centrally and share the ruleset as a skill.md file or a RAG (Retrieval-Augmented Generation) knowledge base - essentially a permanent, updateable memory for generative AI.

Suddenly everyone using Copilot within your agency automatically applies the APS Style Manual in every generation - or applies it selectively, based on what they are seeking to generate, a user setting or a prompt.

The result is that the APS Style Manual stops being guidance people try to remember and becomes a checkpoint that authorised generative AI systems apply every time it's needed - cutting editing and review time.

Staff can also write their content by hand and have AI check the style, editing and rewriting where necessary to better meet government writing standards.

Is this a perfect solution? Probably not - yet. AIs still make mistakes and can misapply or fail to apply some style rules. However, it improves on a basic Copilot or other GenAI solution for government writing, saving time and raising text quality and productivity.
