Tuesday, April 28, 2026

My Presentation from the IBR Conference - The Future of GEN AI for Public Sector Communications and Public Affairs 2026

 Last week, I attended and spoke at the International Business Review (IBR Conferences) event: GEN AI Transforming GOVT PA & COMMS 2026: The Future of GEN AI for Public Sector Communications and Public Affairs 2026 Hybrid Conference.

I've included an excerpt of my presentation notes below for folks who were unable to attend.


The AI Whisperer’s Guide to Practical Deployment

You’re probably here because your organisation is past the question ‘should we use AI?’

Good. Most of us have had that conversation many times.

The question is now: ‘Which battles are worth fighting?’

There are usually more AI opportunities in any organisation than the capacity to pursue them well. 

And the cost of picking the wrong ones isn’t just budget – it’s credibility. Credibility is the currency that funds your next AI project.

I’ve worked with AI in various ways for over 9 years now, building commercial generative AI products, undertaking complex modelling for major infrastructure, and supporting project delivery.

I also have a long history working in Digital, before, during and after Gov 2.0. 

As such, I’ve watched some patterns repeat, and others rhyme – not just in government, but in humans.

So let’s dig into practical implementation.


Before anything else, I want to draw a distinction that I think is worth keeping front of mind.

There are often two distinct uses of AI in government work.

There’s AI that supports human judgment – helping draft, synthesise, analyse and prepare work.

And there’s AI that substitutes for human judgment – making or driving decisions.

In practice, it's not always a clean line, but the question is worth asking of every use case: is a human genuinely making the decisions, or are they ratifying what an AI has determined?

Even when an AI excludes or shortlists options, such as when using an AI to screen applicants in a recruitment process, you should consider whether AI bias is driving a decision bias that isn’t defensible.

For different uses, the risks and governance requirements vary. And the consequences of getting them wrong are often very different.

I don’t need to say more than ‘Robodebt’ to make that point.

Human in the loop isn’t just good practice. In the Australian government, it’s increasingly an ethical and legal expectation.

Everything I’m covering today sits on the drafting-and-support side of that line. AI as a contributor. The human still makes the call, still does the editing and still approves the work.

But if you’re working on something that sits closer to the decision-making side, that’s a topic that needs more time than we have today.


So – where do AI investments tend to fall short? In my experience, it comes down to three patterns. And none of them is really about the AI.

The first is the wrong problem.

AI gets applied to a symptom rather than a cause. Or the real problem turns out to be a process gap, a data quality issue, or an ownership question. AI gets suggested as the solution because it’s trendy, safer, easier, less political or more fundable than ‘we need to fix the process.’ We used to see the same with requests to build a website or create a mobile app. In previous roles in government, my job was often to tell people they didn’t need to build a new thing, and instead educate them on the digital assets we already had that could meet their goals.

AI won’t necessarily fix a broken process. And it often speeds it up, which can make things far worse, far faster. This is exactly what we saw with digital. Automation can compound success, but also compound failure.

The second is an environment that wasn’t ready.

This can take multiple forms. The data was messier and harder to clean than expected. The permission model created complications that no one had mapped. Governance obligations introduced constraints that only became visible mid-implementation. The workflow proved more sensitive than the business case assumed. Or the people expected to use and engage with the AI weren’t brought along on the journey and may never have wanted it in the first place.

None of these is unusual. They’re normal friction when deploying anything in a complex operating environment. The question is whether you surface them early or late. Some organisations are prepared to take the hit – staff leaving, operations disrupted – before things get better; others pull back and call it a failure.

You really need to know your organisation’s appetite and commitment before leading one of these projects, or you can be left out in the cold.

The third is a lack of an internal owner.

The capability to run, adapt, and govern the AI system either never existed within your organisation or walked out when the people who built it moved on. Nobody inside could improve it later when things changed, or govern it when something went wrong.

That’s a capability-and-ownership question that procurement can’t resolve. And it’s worth thinking about before you sign anything.


I also want to spend a few minutes on something that comes up constantly as a blocker, but where there’s a practical path that many organisations haven’t explored yet.

Data security – specifically at the front end of the project. How do you test, procure and demonstrate AI systems without exposing sensitive or classified data to vendors?

You can’t load a confidential document into an online AI model to test its capabilities. You can’t hand private citizen data to a vendor for a proof of concept. You often can’t load-test against live operational data or let a bidder build a demo using your grant records or patient data.

This can often kill a potential AI project before it gets started.

But there’s an option worth considering: synthetic data. And AI can build it for you.

That’s not anonymised data – anonymisation has well-documented re-identification risks. Synthetic data is a dataset that is statistically realistic and structurally accurate but contains no real records.

Here’s a concrete example from my prior work.

I needed to load-test a system across a large physical asset network with millions of individual assets. Using real data wasn’t an option: what existed was sensitive, and what didn’t exist yet – the data needed to project future scale – couldn’t easily be modelled.

So I used AI to build a city.

Not a digital twin of the actual asset network – a synthetic city, constructed from scratch using known proportions of asset types, realistic density estimates, and plausible growth trajectories.

It could scale to whatever size we needed, model our asset growth over ten to twenty-five years, and produce a test dataset with no connection whatsoever to real infrastructure.

It allowed us to load test at any scale. The vendor never saw real data. The security risk was zero.
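To make that concrete, here’s a minimal sketch of the generation step. The asset types, proportions, coordinates and growth rate below are invented for illustration – they are not the real network’s figures:

    import random

    # Illustrative proportions and growth rate - assumptions for this sketch,
    # not real network statistics.
    ASSET_MIX = {"pole": 0.55, "pit": 0.30, "cabinet": 0.10, "exchange": 0.05}
    ANNUAL_GROWTH = 0.03  # assumed 3% asset growth per year

    def synthetic_assets(base_count, years_ahead=0):
        """Build a synthetic asset register: realistic shape, zero real records."""
        count = int(base_count * (1 + ANNUAL_GROWTH) ** years_ahead)
        types = random.choices(list(ASSET_MIX), weights=list(ASSET_MIX.values()), k=count)
        return [
            {
                "asset_id": f"SYN-{i:08d}",                     # clearly synthetic IDs
                "asset_type": t,
                "lat": round(random.uniform(-35.4, -35.1), 6),  # plausible, not real
                "lon": round(random.uniform(149.0, 149.2), 6),
                "install_year": random.randint(1970, 2026),
            }
            for i, t in enumerate(types)
        ]

    # A load-test dataset projected 25 years out - scale the base count as needed.
    dataset = synthetic_assets(1_000_000, years_ahead=25)

Because every field is drawn from distributions rather than copied from records, there’s nothing to re-identify.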

A similar approach may work for any system dealing with large datasets where the proportions and structures are well understood, but the specific data is sensitive. Health systems. Grant systems. Infrastructure procurement. Regulatory systems. You could even have AI write tens of thousands of synthetic ministerial briefs and the back-and-forth correspondence across parliament on common topics to test and demo new Parliamentary Document Management Systems.

This gives agencies the opportunity to provide vendors with synthetic datasets – reflecting realistic shapes and adjusted for jurisdiction-specific requirements, but containing no sensitive content and creating no security or privacy exposure. Yes, they might see the overall shape of the system – but that’s what they’re providing anyway. If they didn’t know the system’s shape to begin with, they wouldn’t have a product to demonstrate.

Vendors build and demonstrate against the synthetic. You evaluate their system properly. You can even use it to test edge cases and load scenarios for existing systems that you couldn’t safely test using real data.

I think of it as a digital cousin. Looking enough like you to size clothing, but not enough to fool your parents.

It helps reframe security from something that blocks deployment into something that enables it safely.


Next, I want to go back to my earlier point: fighting the right battles.

Think of one AI investment your agency has made or is seriously considering. It doesn’t have to be large. It could be a drafting agent, a customer service chatbot, something for data analysis or procurement. Just hold something real in mind.

I’m going to run through four questions. See how your use case sits against each of them.

First: Need.

Is the problem real, recurring and bounded? Is there enough volume to make it worthwhile? Is the domain clear enough to work with?

Or, being honest here, is the underlying problem actually a process gap, a data quality issue, or an ownership question that’s been reframed as an AI opportunity because that’s where the momentum is?

Catching it early is much less painful than catching it after a commitment is made.

Next: Fit.

Does the proposed solution support the systems, content quality, and governance obligations you actually have, rather than the environment a vendor’s demo assumes?

And, coming back to the distinction I drew earlier, is this clearly a drafting and support use? Or has it drifted toward AI making or influencing a decision?

That drift tends to happen quietly. The language shifts from ‘AI will help officers assess’ to ‘the system will flag for review’ to ‘applications below the threshold will be automatically declined.’

Each step is incremental. The cumulative effect is that there’s no human genuinely in the loop anymore.

That’s dangerous territory to step into.

Third: Ownership.

Who inside your organisation will run this, refine it and govern it once it is built and any vendor relationship ends?

Finance’s GovAI initiative is doing solid work building foundational AI capability across the APS. That’s genuinely useful infrastructure, much like GovCMS.

For GovCMS, your agency has to own the website at the end of the process. You can’t hand off responsibility to Finance. They’ll keep the system up, but won’t update and improve your content and navigation.

For AI, even when using a GovAI platform, your agency still needs to own the outcome. You need people internally who understand the system well enough to know when it’s going wrong or not keeping up with changing policy and internal needs.

If the ownership question is still being worked out, that’s probably the first thing to resolve. Without internal ownership, the other three questions become someone else’s problem and often get dropped.

Finally: Effect – which you can also read as value.

What actually changes if this initiative works?

Not ‘people will use it more.’ Not ‘staff will find it helpful.’

What changes in citizen experience, policy delivery, workload, risk, consistency or quality? And by how much? And how will you measure that change over time?

If the most compelling metric you can name right now is an adoption rate, it’s worth going a level deeper. Adoption tells you people tried it. It doesn’t tell you what difference it made.


One thought to leave with you that I don’t think anyone has fully worked out yet.

When you build teams that include AI contributors alongside humans, you’re managing a new kind of workforce diversity. Just as changing a single individual can radically alter a team’s dynamics, so can adding an AI contributor advanced enough to produce decent work and take some load off its human teammates.

This is new – different from any workforce change we’ve seen before in human history – and it requires new management practices.

It isn’t a technology challenge. Your IT leaders can’t offer qualified guidance on how to manage humans and AIs as a unified team. It’s a people leadership challenge.

Organisations that work this out deliberately will get more from the same tools and people.

There isn’t a handbook for it yet. But it is worth thinking about.



Friday, December 05, 2025

BOM – Flying Above the Radar

Australia's Bureau of Meteorology (BOM) has copped a downpour of headlines for spending A$96.5 million on what some gleefully call “a website.” 

That framing is catchy - but it’s wrong. 

The public‑facing interface – the bit we click – cost about A$4.1 million.

The heavy lifting was a complete rebuild and testing of the systems and technology behind it: ingesting vast volumes of observations and model outputs, securing them, and serving them at national scale, reliably, every minute of every day.

In other words, the BOM didn’t buy a slick homepage; it rebuilt the foundation that gets critical environmental intelligence to Australians on a timely basis - whether they’re at the airport, on the farm, or fighting a fire (as I did yesterday).

This effort didn’t happen in isolation. It sits alongside BOM’s seven‑year ROBUST technology program, which upgraded cybersecurity, networks, data centres, a disaster‑recovery supercomputer, and the national observing network (including dual‑polarisation radars). By closure in mid‑2024, ROBUST had invested A$866 million to harden Australia’s environmental intelligence backbone after the bureau’s 2015 cyber breach and outages. That’s the plumbing you don’t see when you load a forecast.

So, what failed? Not the core project goal – modernising national infrastructure – but the communication and release strategy.

The new site launched into severe weather, and ordinary users and power users alike found basic tasks harder: radar colours felt off, local details were buried, familiar navigational cues had shifted. 

The backlash was immediate; ministers demanded fixes; BOM reverted radar visuals and committed to rapid improvements. Timing, messaging and usability were misjudged—and they matter.

This is where “low‑hanging fruit” counts. When you’re redeveloping an entire weather reporting and dissemination system for a country, the public expects familiar wins to land early: clearer local pages, consistent radar palettes, quick paths to wind, rain and temperature. BOM missed several of these, and the community told them—loudly. The lesson isn’t “don’t modernise”; it’s design with users, ship the obvious wins first, and narrate the journey.

User testing should never be a checkbox; it’s your pre‑flight weather check. That means beta programs, co‑design with sector users (farmers, SES crews, pilots), and A/B testing for navigation and visuals so you learn what works before you re‑platform at scale. 

If industry best practice says test early, test often, test live - heed it. A handful of controlled experiments can surface friction months before a national launch. 
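For what it’s worth, the maths behind ‘a handful of controlled experiments’ is not exotic. A simple two-proportion z-test on task-completion rates will tell you whether a navigation change actually helped – the figures below are invented purely for illustration:

    from math import sqrt

    def two_proportion_z(success_a, n_a, success_b, n_b):
        """Did variant B's task-completion rate genuinely beat variant A's?"""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return (p_b - p_a) / se  # |z| > 1.96 is significant at the 5% level

    # Hypothetical: 540 of 1,000 beta users found their local radar in design A,
    # versus 610 of 1,000 in design B.
    z = two_proportion_z(540, 1000, 610, 1000)
    print(round(z, 2))  # ~3.17 - design B is a real improvement

A few thousand beta users, tested this way, can settle arguments that would otherwise play out in the national press.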

But why was there so much fuss over plumbing? Because weather intelligence is critical infrastructure. Agriculture uses it to time planting and spraying; fisheries plan around marine forecasts; aviation and defence rely on tailored briefings and feeds for safe operations; emergency services need resilient ingest and rapid warnings when floods and fires escalate. It’s a national capability upgrade serving safety, the economy and national security.

In a world of constrained trust, how you deliver matters almost as much as what you deliver. Publish the cost breakdown early (front end vs back end). Explain the architecture in plain English. Stage releases. Keep the radar legible and familiar. 

And treat feedback as flight data, not turbulence: respond visibly and fast. That’s how you reduce reputational risk while raising technical resilience. It’s also how governments maintain confidence in big tech investments.

If I were coaching an agency on a transformation of this scale, my pragmatic checklist would be:

  • Mission first, interface second. Lead with the purpose: warnings, safety, continuity. Then show the pixels.
  • Ship the obvious wins early. Keep users fed with the basics - clear radar, local conditions, one‑click access to wind/rain - while deeper systems evolve.
  • Co‑design and A/B test. Put farmers, SES, pilots and fishers in the cockpit; instrument the site; run controlled trials before national rollout.
  • Stress‑test comms like the platform. Prepare explainer packs, diagrams and FAQs; brief ministers and media ahead of launch; communicate changes and reversions promptly.
  • Be resilient and transparent. When a severe weather season collides with deployment, delay updates that add risk, and explain why. That’s good ops and good public service.

Bottom line: the BOM didn’t just build a website. It rebuilt resilience. 

The next time you check the radar before a weekend BBQ, remember: that little map is powered by one of the most sophisticated and secure data platforms in the world. And that’s worth every cent.

That said, the public frustration is real, and instructive. The BOM tried to fly below the radar, but forgot that it IS the radar. A critical service Australians rely on and mostly love. The criticism becomes that much harsher when things appear to go wrong with something we love.

However, when we combine world‑class infrastructure with world‑class user practice – co‑design, beta programs, A/B testing, signalling changes, staged releases and plain‑English comms – we can get a platform that stands up to cyclones, cyber threats and criticism alike.

That’s the point of flying above the radar: you see any storms sooner, and can steer accordingly.


Friday, November 22, 2024

Submission regarding under 16 access to social media networks

I’m not terribly sympathetic to the Australian Government’s attempt to ban social media access for under-16s.

There are many reasons why a blanket approach is bad – from removing parental control to removing support networks from at-risk and diverse teens.

However, if they’re going to propose it, they should at least have a solid plan for implementing it effectively in a way that might do some good – which the current bill doesn’t really offer.

So I worked this morning with ChatGPT to write a submission addressing the failings of the amendment bill – and this would be my submission if there were a way to make one (which there is not).


Submission to Online Safety Consultation 

By Craig Thomler

Subject: Response to Proposed Social Media Age Restrictions under the Online Safety Amendment (Social Media Minimum Age) Bill 2024

Introduction


The Online Safety Amendment (Social Media Minimum Age) Bill 2024 is a step in the right direction but risks achieving more harm than good in its current form. While its intention to protect children online is commendable, the proposed measures are overly simplistic and disproportionately punitive, failing to address the nuances of a complex digital landscape.

Blanket restrictions won’t stop harm; they’ll push it into unregulated spaces. Small platforms face extinction under these penalties, while big tech barely flinches. The amendment sacrifices privacy in the name of safety, creating new risks instead of reducing them.

To achieve its goals, the amendment must shift from broad-brush penalties to precision policy by integrating smarter, fairer solutions that work for all stakeholders. Through tailored compliance, strengthened privacy protections, and parental involvement, we can build a safer digital environment without alienating users or stifling innovation.

Harms Caused by the Proposed Amendment


1. Negative Impacts on Smaller Platforms
  - Disproportionate Burden: The flat penalties of 30,000 penalty units fail to consider the operational and financial capacity of small-to-medium-sized platforms. These businesses often lack the resources to implement complex age-verification systems, potentially forcing them out of the market.
  - Stifling Innovation: Smaller platforms, many of which cater to niche communities or serve educational purposes, may cease operations due to compliance costs and risks of high penalties.

2. Insufficient Privacy Safeguards

  - Data Mismanagement Risks: Requiring platforms to collect and store sensitive age-verification data increases the risk of breaches, identity theft, or misuse by malicious actors. The amendment does not include clear guidelines on data minimisation or secure destruction.

  - Intrusiveness: Intrusive mechanisms, such as uploading government IDs, can deter users and create additional risks if such data is mishandled.

3. Inadequate Addressing of Online Harms
  - Overfocus on Age Restrictions: By limiting access based solely on age, the amendment fails to address broader issues such as exposure to harmful content, algorithmic manipulation, and the role of content moderation.
  - Circumvention Risks: Children can easily bypass age restrictions by creating fake accounts or using anonymisation tools (e.g., VPNs), undermining the efficacy of the measures.

4. Alienation and Inequity
  - Exclusion from Social Development: Adolescents (13–16 years) rely heavily on social media for peer interaction, cultural participation, and identity development. Blanket restrictions risk isolating them from their social circles, causing feelings of exclusion and rebellion.
  - Reduced Parental Autonomy: The amendment removes discretion from parents and guardians, who are better positioned to decide when and how their children engage with social media.

5. Unfair Application of Penalties
  - Ineffectiveness for Larger Platforms: For major platforms (e.g., Meta, TikTok), the flat penalty is negligible compared to their revenue. This reduces the amendment’s deterrent effect and gives larger platforms a competitive advantage over smaller networks.

Proposed Improvements


1. Revised Penalty Structure

  Introduce a sliding-scale penalty system based on platform revenue, user base, and level of non-compliance:

  - Platforms with annual global revenue under $5 million AUD: Maximum fine of 1,000 penalty units.
  - Platforms with annual global revenue between $5 million and $100 million AUD:
    - Base penalty: 2% of Australian revenue.
    - Additional penalties for repeated breaches: Public accountability measures (e.g., public notices or restrictions on ad targeting in Australia).
  - Platforms with annual global revenue exceeding $100 million AUD:
    - Base penalty: 5% of Australian revenue or $10 million AUD, whichever is greater.
    - Additional penalties for repeated breaches:
      - Public reporting of non-compliance on government and platform websites.
      - Platform-wide visibility restrictions in Australia (e.g., throttling algorithmic content recommendations for non-compliant platforms).
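  For clarity, the tiers reduce to a short calculation. Treat the following as an illustrative sketch only – the penalty-unit dollar value is an assumption, since the Commonwealth penalty unit changes over time:

    def penalty_aud(global_revenue, au_revenue, penalty_unit=330.0):
        """Illustrative sliding-scale penalty under the proposed tiers (AUD)."""
        if global_revenue < 5_000_000:
            return 1_000 * penalty_unit            # small platforms: capped fine
        if global_revenue <= 100_000_000:
            return 0.02 * au_revenue               # mid tier: 2% of Australian revenue
        return max(0.05 * au_revenue, 10_000_000)  # large: 5% or $10m, whichever is greater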

2. Verified Parent/Guardian Discretion
  Introduce a Parental Consent Framework to allow parents/guardians to grant access to age-restricted platforms for children under 16:
  - Consent must be verified through secure processes, such as:
    - Linking to a verified parental account.
    - Submitting proof of guardianship alongside consent forms.
  - Platforms must enable granular parental controls, allowing guardians to monitor and manage their child’s activity.
  - Require platforms to provide education resources for parents on managing online safety risks.

3. Government-Endorsed Digital Age Verification System (DAVS)
  - Develop a Digital Age Verification System (DAVS) that:
    - Verifies user ages anonymously through tokens or hashed data.
    - Provides a standardised, privacy-focused solution for all platforms, reducing compliance costs.
  - DAVS integration to be mandatory for large platforms, while optional (but subsidised) for smaller ones.
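  A rough sketch of the anonymous-token idea follows. A production DAVS would use asymmetric signatures so platforms hold only a public key; the shared-secret HMAC version below simply keeps the illustration short:

    import hashlib, hmac, json, time, secrets

    SECRET = secrets.token_bytes(32)  # held by the DAVS issuer

    def issue_age_token(is_16_plus):
        """Issue an anonymous, signed claim - 'holder is 16+' and nothing else."""
        claim = json.dumps({"age_ok": bool(is_16_plus), "exp": int(time.time()) + 3600})
        sig = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
        return claim + "|" + sig

    def verify_age_token(token):
        """Platform-side check: signature valid, token unexpired, no identity revealed."""
        claim, _, sig = token.rpartition("|")
        expected = hmac.new(SECRET, claim.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return False
        data = json.loads(claim)
        return data["age_ok"] and data["exp"] > time.time()

  Because the token carries only a yes/no age claim and an expiry, the platform learns nothing about who the user is.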

4. Strengthened Privacy Protections
  - Mandate data minimisation practices:
    - Platforms must only collect data strictly necessary for age verification.
    - All verification data must be encrypted and destroyed within 14 days of account approval.
  - Introduce regular privacy audits overseen by the Office of the Australian Information Commissioner (OAIC), with public reporting of results.
  - Prohibit platforms from using age-verification data for advertising or algorithmic purposes.

5. Non-Monetary Penalties
  - Public Notices of Non-Compliance: Published on the eSafety Commissioner’s website and the platform’s Australian homepage.
  - Operational Restrictions: For repeated breaches, restrict platform visibility (e.g., content distribution) until compliance is demonstrated.
  - Mandatory User Notifications: Platforms must notify all Australian users about their non-compliance and remediation steps.

6. Grant Support for Smaller Platforms
  - Establish a Small Platform Support Fund to assist platforms with annual global revenue under $5 million AUD in adopting compliant age-verification systems:
    - Grants covering up to 80% of compliance costs (e.g., DAVS integration or equivalent).
    - Eligibility contingent on adherence to a simplified compliance framework.

7. Education and Awareness Campaigns
  - Fund national education initiatives to promote:
    - Digital literacy for children and parents.
    - Awareness of online safety resources and the amendment’s goals.
  - Collaborate with schools to integrate digital safety into curricula.

Conclusion


The Online Safety Amendment (Social Media Minimum Age) Bill 2024, while well-intentioned, risks significant harm if implemented in its current form. The proposed measures disproportionately impact smaller platforms, fail to adequately protect privacy, and overlook the broader risks of online harm and exclusion.

To create an effective and equitable framework, I strongly urge the adoption of the proposed improvements in this submission. These include proportional penalties, a government-endorsed age verification system, strengthened privacy protections, and parental involvement. By addressing these gaps, we can achieve a balanced policy that protects vulnerable users without stifling innovation or overburdening businesses.

I look forward to seeing a revised and refined framework that aligns with these principles.

Thank you for considering this submission.




Wednesday, April 05, 2023

What does AI mean for entrants into the job market?

Last week my youngest daughter graduated from university with an IT-related degree (with second-class honours – she did better than me!).

At her graduation ceremony, the occasional speaker, who was a public and not-for-profit sector CTO, congratulated graduates for being some of the last to graduate without the support and integration of large language model AIs in their degrees. She also commented that her speech was partially written by ChatGPT.

While I've used generative AIs in my work and personal life for more than two years, these comments really struck home for me how integrated these AIs will become in everyone's lives.

No-one being educated today should be denied access to AI support to help them to grasp concepts and elevate their work to the highest possible standard.

No-one working in an office should be denied access to support AI to help them perform at their best every day.

Organisations that seek to cut off access to these AIs because it's somehow 'cheating' should note how calculators used to be seen as tools for cheating by teachers, and how some workplaces resisted the introduction of computers because executives and managers didn't type themselves – they dictated to typing pools.

Indeed, within the last 20 years, I worked for a Secretary in a very large Federal government department who did not have a computer in her office. She had her EA print all her emails and responded via dictation, which her EA typed up. To present a new-look website to her, we had to print dozens of pages, and she'd point to a button and ask what happened if the user clicked it.

This doesn't cut the mustard in the AI Age we've now entered. We need to give our students and workers access to the most modern tools to help them perform at peak.

So when I saw the following tweet, I had to ask GPT-4 for a response, using her tweet verbatim as the prompt.


This is what GPT-4 generated:

As an AI language model, I can understand why some members of Gen Z might be concerned about the impact of AI on their careers and future prospects. However, I believe that AI is not necessarily a threat but can be an opportunity if approached in the right way. Here are a few tips that might help:

  1. Develop skills that are complementary to AI: Instead of competing with AI, focus on developing skills that complement it. For example, skills that require human judgment, empathy, and creativity are still in demand and will continue to be valuable even as AI advances.
  2. Stay curious and keep learning: AI is still a relatively new field, and there is a lot to learn. Keep up with the latest trends and developments, take online courses or attend workshops to learn new skills, and stay curious about how AI is being applied in different industries.
  3. Collaborate with AI: Rather than seeing AI as a competitor, try to work alongside it. AI can be a powerful tool for automating tasks and making processes more efficient, so be open to collaborating with AI in your work.
  4. Embrace change and be adaptable: The job market is constantly changing, and the skills that are in demand today may not be in demand tomorrow. Be prepared to adapt to new technologies and trends, and don't be afraid to take on new challenges and learn new skills.
  5. Develop a growth mindset: Finally, cultivate a growth mindset and embrace the idea that learning and development are ongoing processes. See challenges as opportunities for growth and be open to new experiences and ideas.

Overall, it's important to remember that AI is still in its early stages, and it's difficult to predict exactly how it will impact the job market. However, by staying curious, learning new skills, and embracing change, you can position yourself to thrive in the new world of work.


Now Jenny felt this was a trifle impersonal to use without a human edit - which is fair enough. 

AI shouldn't be used to automate engagement, but to help streamline it. There's always going to be a role for humans to review and adjust messages to match the audience, using the context a human has that an AI does not.

However, it is possible to get AI to add personality - you simply have to tell it what personality you want it to add.
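In practice, that's just a system instruction. Here's a minimal sketch using the pre-v1 OpenAI Python library as it stood at the time – the persona wording is whatever you choose, and the tweet is supplied verbatim:

    import openai  # pre-v1 openai library, current at the time of writing

    openai.api_key = "YOUR_API_KEY"

    def respond_in_voice(persona, tweet_text):
        """Ask GPT-4 to answer the tweet in a named persona's voice."""
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "Respond in the voice of " + persona + "."},
                {"role": "user", "content": tweet_text},
            ],
        )
        return response.choices[0].message.content

    tweet_text = "..."  # the original tweet goes here, verbatim
    print(respond_in_voice("Albert Einstein", tweet_text))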

So I generated three further versions...

Using the voice of Albert Einstein:

Intelligence is not a competition between humans and machines, but rather a collaboration. Instead of fearing the advances of artificial intelligence, we should focus on developing complementary skills that require human judgment, empathy, and creativity. One must always remain curious and keep learning, for knowledge is the key to unlocking new opportunities. Embrace change and be adaptable, for the job market is constantly evolving. Finally, approach challenges with a growth mindset and see them as opportunities for personal development.

Using the voice of Arthur C Clarke:

As I've always maintained, the only way to discover the limits of what is possible is to venture past them into the impossible. In the age of artificial intelligence, we should not fear the rise of the machines but embrace it, for it can offer us a new path to knowledge and discovery. Instead of competing with AI, we should focus on developing skills that are complementary to it, such as our unique human abilities of creativity, intuition, and empathy. Learning must be a lifelong pursuit, and we must keep up with the latest advancements to stay relevant in the changing landscape of the job market. The ability to adapt and evolve is paramount in this new era, and we should see challenges as opportunities for growth and transformation.

And, finally, in the voice of Mahatma Gandhi:

Artificial intelligence is a new force in our world, and while it may bring challenges, we must approach it with a peaceful and positive mindset. Rather than viewing AI as a threat, we should strive to develop skills that complement it, such as our unique human capacities for creativity, empathy, and compassion. Learning should be a lifelong pursuit, and we must stay curious and open-minded to the possibilities of AI. We should collaborate with AI, not compete with it, for only by working together can we achieve true progress. Let us embrace change and be adaptable, for the world is constantly evolving, and challenges are opportunities for growth. With a growth mindset and a spirit of cooperation, we can create a future where AI and humanity coexist in harmony.

Whether perfectly true to the originals or not, these variations were produced in a few minutes, far faster than a human could perform the same task. 

With AI the door is open to producing content in varied voices and perspectives at great speed.


Tuesday, April 04, 2023

2nd Australian Responsible AI Index launched - calls for government to regulate sooner rather than later

Today marked the unveiling of the 2nd Australian Responsible AI Index, accompanied by urgent appeals for the government to intervene and curb the potential misuse of artificial intelligence (AI). 

The Australian Financial Review provided comprehensive coverage of this critical topic, revealing that a mere 3% of Australian companies are managing the adoption and continuous use of AI in a responsible manner.

As AI permeates almost every facet of business operations, it is crucial that its management and regulation extend beyond vendors and IT teams, ensuring responsible policies are in place for both the business and society as a whole.

The Index report disclosed several key findings:

  • The average Responsible AI Index score for Australian organisations has remained stagnant at 62 out of 100 since 2021.
  • While a significant 82% of respondents believe they are adopting best-practice approaches to AI, a closer look reveals that only 24% are taking conscious steps to guarantee the responsible development of their AI systems.
  • There has been a growth in organisations with an enterprise-wide AI strategy linked to their broader business strategy, with the figure rising from 51% in 2021 to 60%.
  • Among those organisations, only 34% have a CEO who is personally committed to spearheading the AI strategy.
  • Organisations with CEO-led AI strategies boast a higher RAI Index score of 66, compared to a score of 61 for those without direct CEO involvement.
  • A total of 61% of organisations now recognise that the advantages of adopting a responsible AI approach outweigh the associated costs.

The Responsible AI Index serves as a timely reminder for the Australian government to act swiftly in the face of these findings, reinforcing the need for a more responsible approach towards AI implementation across the board.

