
Australian Renaissance Party

A necessary political movement

Critique of the 2025 National AI Plan

A detailed analysis of the Australian Government's 2025 National AI Plan by the Australian Renaissance Party, identifying nine fundamental weaknesses in the government's approach to AI-driven workforce displacement.

…And was Jerusalem builded here, / Among these dark Satanic Mills? — William Blake, c. 1806


1.0 Preamble

It is hard to imagine what Blake would have thought looking at the glass towers and clean skies of a modern Australian city. He wrote those words forty years after the Industrial Revolution had begun; near the middle of the chaos and social dislocation that the new steam technology had wrought. Immiseration, squalor, overcrowding and disease were rampant. Men, women and children averaged fifteen-hour days, six days a week. Life expectancy declined. The first phase of the Industrial Revolution was not kind to those who lived through it; nor may this next phase be kind to us. Transitions are hard, particularly when not managed.

History remembers the outcome of the Industrial Revolution (the rise of the middle class, increased longevity, modern medicine) but glosses over the process: three generations of immiseration before policy and labour rights caught up to technology.

The lesson of the Industrial Revolution is that while technological change may be inevitable, its social toll can be a policy choice.

People regard ChatGPT today with the same curious engagement as the peasants who marvelled at the smoke-belching, steam-whistling, iron-wheeled tractor the landlord had just acquired. The Australian Government has similarly responded with its 2025 National AI Plan. This paper examines that response and explains why it is inadequate and will fail.

For the theoretical underpinnings and technical basis of our critique, the interested reader should consult our original 2018 paper, "The Luddite Fallacy Fallacy," submitted to the Senate Select Committee on the Future of Work and Workers, and referenced throughout.

The Australian Renaissance Party exists because incumbent governments have not addressed the predicted hard landing. Without intervention, Australians face immiseration and dislocation of a kind not seen since the conditions Blake described two centuries ago.


2.0 Critique of the 2025 National AI Plan

In 2025, the Australian Government released its National AI Plan, establishing a whole-of-government framework that positions AI as a core pillar of the 'Future Made in Australia' agenda. The plan acknowledges the significance of AI by outlining a strategy to build sovereign capability, mandate worker consultation to protect jobs, and establish an AI Safety Institute to manage emerging risks.

The 2025 National AI Plan is fundamentally reactive; it fails to account for the accelerating trajectory of AI or the structural mechanisms of its entry into the labour market. Our principal concerns are as follows.

2.1 The Fallacy of "Job Protection" through Reskilling

The National AI Plan's primary response to labour disruption (reskilling and "lifelong learning") is a reactive measure that fails to address the root cause of displacement. The plan treats AI as a tools-based transition, similar to the move from paper to spreadsheets. AI is not merely a tool; it is operationally infective, what economists term a General Purpose Technology: a creative-destructive force.

By entering the workforce at the task level, AI integrates into existing systems as a "complement." This integration masks the immediate loss of headcount while creating an irreversible dependency. Once a task is algorithmically managed, the economic and operational justification for a human learner to perform that task is permanently eliminated.

The plan's own existence betrays the flaw in its logic. Three years ago, large language models were a research curiosity. The rise of ChatGPT and its successors was sufficiently rapid that even ardent sceptics of AI displacement were compelled to take notice. The 2025 Plan is the government's silent admission that the technology moved faster than anticipated and that the existing workforce is already behind; "reskilling" is the proposed remedy.

The "Learn to code" and STEM pushes of the 2012–2020 era were the previous iteration of this same reflex. Governments and industry leaders urged displaced workers to retrain into software development, data science, and engineering, the fields then considered immune to automation. By 2024, AI code generation tools had reduced entry-level programming to a commodity; by 2025 agentic systems were writing, testing, and deploying production software with minimal human oversight. The very skills that were prescribed as the cure are now among the first to be automated. The 2025 Plan repeats this error without acknowledging that its predecessor failed.

If three years of progress was sufficient to render the workforce unprepared, on what basis does the government assume that a reskilling program designed in 2025 will remain relevant by 2028? The trajectory is not levelling off; it is steepening. The capabilities that prompted this plan will be the baseline within three years. A worker retrained today for the AI landscape of 2025 will find that landscape unrecognisable before their new credentials are issued. The plan assumes a stable destination to reskill toward. There is no such destination. It is aiming at where the technology was, not where it is going.

2.2 The ISO-9000 Driver: The Industrial Demand for "Sameness"

A critical oversight of the 2025 Plan is its failure to identify ISO 9000 and related quality standardisation frameworks as primary catalysts for AI adoption and automation. Modern industry demands a level of consistency and repeatability that human variance cannot match.

Machines and AI models are purpose-built for the high-fidelity repetition and fine tolerances required by global supply chains. By ignoring this driver, the government's plan focuses on "innovation" while the market is actually optimising for the systemic removal of human idiosyncrasy from the production and service cycles.

2.3 The "Junior Wall": Barriers to Entry for the Next Generation

The National AI Plan highlights the "Next Generation Graduates Program" but ignores the structural "Junior Wall" increasingly being created by task-level automation. Entry-level positions are traditionally defined by well-documented, repetitive tasks. Because these tasks are well documented, they are the easiest to automate; AI effectively "blocks" the entry point to professional careers. You can now literally instruct an AI to read the manual.

This creates a paradox: the plan improves the productivity of incumbents while pulling up the ladder for new hires, leading to a long-term atrophy of sovereign Australian talent.

2.4 No Theory of Displacement: The "Tasks Not Jobs" Blind Spot

The most consequential omission in the 2025 Plan is its lack of any structural theory of how AI displaces labour. The plan acknowledges, citing Jobs and Skills Australia, that "AI reshapes tasks rather than entire jobs." It then proceeds as though this observation requires no further analysis.

Our 2018 submission to the Senate Select Committee on the Future of Work and Workers made this mechanism the centrepiece of its argument. Substitution does not begin with redundancy. It begins when a machine performs a task within an occupation more cheaply, more reliably, or at greater scale than the person who formerly performed it. The occupation persists in name; the substance is hollowed out. A profession can lose the majority of its productive content while headcount statistics remain stable, because what remains is residual coordination, compliance, or client-facing activity that has not yet been automated.

The 2025 Plan measures the labour market by headcount. It counts jobs. It does not measure the economic content of those jobs, the proportion of each role that has migrated into software, or the rate at which that migration is accelerating. A nation can report full employment while the productive substance of that employment has quietly transferred to machines. By the time displacement registers in headline unemployment figures, the structural damage (the loss of entry pathways, the erosion of professional depth, the concentration of remaining value into a shrinking technical elite) is already entrenched.

We identified this pattern eight years ago. The 2025 Plan has not absorbed the lesson. Its entire framework of "reskilling" and "workforce mobility" presumes that displacement is visible and countable; that a displaced worker can be identified, retrained, and redirected. Task-level substitution does not work this way. It is gradual, distributed, and largely invisible to conventional labour statistics until the cumulative effect is irreversible. This resolves Solow's paradox: "you can see the computer age everywhere but in the productivity statistics".

The plan compensates for this absence of structural understanding with a proliferation of named programs and institutional titles: the Next Generation Graduates Program, the AI Safety Institute (AISI), the National AI Adoption Taskforce, Jobs and Skills Councils. These are honorifics, not instruments. They convene, consult, and publish guidance. None has the authority, the mandate, or the metrics to alter the trajectory of displacement they were ostensibly created to address.

2.5 Voluntary Compliance and the Absence of Enforceable Guardrails

The plan's governance model rests overwhelmingly on voluntary measures. The National AI Centre's "Guidance for AI Adoption" offers six essential practices. The "Being clear about AI-generated content" framework recommends labelling, watermarking, and metadata recording. Industry is invited to self-regulate. Mandatory obligations are deferred to existing regulators, who are left to "identify and manage harms and report any gaps in laws to the AISI."

The plan itself introduces no new enforceable obligation on any firm adopting AI. No disclosure requirement for task-level automation. No mandatory reporting of workforce composition changes driven by AI integration. No threshold at which consultation becomes compulsion. The entire structure is a governance framework built on goodwill and hand-holding, deployed into a domain driven by competitive pressure.

Our 2018 paper argued that the logic of substitution is economic before it is ethical: firms adopt AI because it is cheaper, faster, and more reliable, not because they wish to displace workers. Voluntary restraint asks firms to act against their competitive interest. History offers no example of an industry voluntarily constraining its own efficiency gains at scale. The plan's reliance on voluntarism is not a governance strategy; it is the absence of one.

Compounding this, the plan defines no metric for success. There is no target for workforce displacement to remain below, no timeline for review, no threshold that would trigger escalation from voluntary measures to mandatory ones. A governance framework that sets no criteria by which it can be judged to have failed is, by design, unfalsifiable. It is not a plan; it is a posture.

2.6 Infrastructure as Dependency, Not Sovereignty

Action 1 of the plan celebrates over $100 billion in announced data centre investment from Amazon, Microsoft, and Firmus as evidence of Australia's strategic positioning. The plan treats the presence of foreign-owned compute infrastructure on Australian soil as equivalent to sovereign capability.

It is not. Sovereignty over AI systems requires sovereignty over the physical layer on which they run: the compute, the storage, the networking fabric, and the energy that powers them. A nation that hosts foreign-owned data centres is a tenant, not a landlord. The terms of access, the priority of workloads, and the strategic direction of that infrastructure remain in the hands of the owner. When strategic interests diverge, as they inevitably do between nations, the host country discovers the difference between attracting investment and building capability.

There is a deeper irony. The compute housed in these foreign-owned facilities is the very infrastructure on which task-level substitution runs. The models that will displace Australian accountants, paralegals, radiologists, and logistics coordinators will execute on servers owned by American hyperscalers, built on Australian land, drawing Australian power. The plan celebrates the construction of the machinery of displacement as an economic win, without pausing to consider who benefits when that machinery is switched on.

The plan conflates these two objectives. It counts foreign capital expenditure as national strength without addressing the structural dependency it creates. Australia's 2018 experience with submarine cable routing, and the broader Five Eyes debate over Huawei infrastructure, demonstrated that physical infrastructure is strategic infrastructure. The same logic applies to compute. The plan does not appear to have learned this lesson.

2.7 The Energy Contradiction

The plan endorses the G7 Energy and AI Work Plan and notes that data centre electricity consumption across the National Electricity Market stood at approximately 4 TWh in 2024, roughly 2% of grid-supplied power. It then acknowledges, citing AEMO, that this demand is expected to triple by 2030. In the same breath, it asserts that this growth will "promote investment in renewable energy and maintain affordable energy for households and businesses."

This is an assertion without mechanism. A tripling of data centre demand represents a significant new baseload burden competing directly with household and industrial consumers on a grid already under transition and climate stress. The plan offers no modelling of the price impact, no priority framework for when AI compute demand and household affordability conflict, and no contingency for the scenario in which renewable capacity does not scale fast enough to absorb both existing demand growth and the new AI-driven load.

The assumption that the interests of data centre operators and Australian households will naturally align is precisely the kind of optimistic non-planning that characterises the document as a whole. Energy is a finite resource at any given point in time. Allocating it requires choices. The plan declines to acknowledge that any choice is required.

The plan's implicit reliance on renewable expansion compounds the problem. In a hotter, wetter climate characterised by increasing cloud cover, extreme weather events, and shifting seasonal patterns, solar and even wind generation become less consistent precisely when demand is rising. A national AI infrastructure strategy that depends on variable renewables for baseload compute is building on sand.

We recommend that Australia pursue nuclear energy, specifically thorium molten-salt reactors. Australia holds some of the world's largest thorium reserves. It has vast, geologically stable desert suitable for reactor siting. And the technology is no longer theoretical: China's experimental thorium molten-salt reactor at Wuwei achieved criticality in 2023, demonstrating that the approach works beyond the laboratory.

Nor do fossil fuels offer a credible fallback. Oil and gas supply lines are subject to disruption by war or political malfeasance, as the ongoing Strait of Hormuz crisis demonstrates. A nation that underwrites its AI compute infrastructure with energy sources that can be interdicted by a single regional actor has not solved its sovereignty problem; it has relocated it. Moreover, continued reliance on hydrocarbons exacerbates the very climate instability that undermines renewables in the first place, a feedback loop the plan does not acknowledge.

A sovereign, high-density, weather-independent, and supply-chain-independent energy source is not a luxury in this context; it is a prerequisite for any credible AI infrastructure strategy.

2.8 The Missing Distributive Question

The Ministers' Foreword declares that "every person benefits from this technological change." The plan offers no mechanism by which this aspiration becomes reality.

The implicit logic is: AI will increase productivity; productivity will generate growth; growth will create jobs; jobs will distribute wealth. Each link in this chain is, at best, conditional, and the chain as a whole requires precisely the kind of equitable distribution that has not characterised any previous technological transition without deliberate policy intervention. The plan, in effect, relies on exactly the outcome it was written to ensure.

There is no taxation framework for automated output. No consideration of how AI-driven productivity gains flow: to capital, to consumers through lower prices, or to the workers whose tasks have been absorbed. No wealth-sharing instrument of any kind. No discussion of whether existing distributive mechanisms (wages, taxation, transfers) remain fit for purpose in an economy where productive power increasingly resides in code and capital rather than labour.

The closest the plan comes is the assertion that AI will "create secure, well-paid jobs in future industries." This assumes that the market, left to its own devices, will distribute the gains of automation equitably. Our 2018 paper argued the opposite: that the economics of AI are inherently concentrating, because software scales at near-zero marginal cost while labour does not. The gains accrue to the owners of the system, not to the workers it displaces. The plan provides no evidence for its assumption and no fallback if it proves wrong. For a document that claims to be a "whole-of-government framework," this is a remarkable omission.

2.9 SME Exposure: The "Adopt or Die" Gap

Action 4 promotes the scaling of AI adoption across the economy, with particular attention to small and medium enterprises through the National AI Adoption Taskforce and government procurement incentives. The framing is uniformly positive: adoption is presented as opportunity.

The economics of AI are hyperscalar. Marginal cost approaches zero at scale: a model trained once can serve a million customers at negligible incremental expense. This confers a structural advantage on large firms that is qualitatively different from previous economies of scale. The result is an accelerated predator dynamic in which large, AI-equipped firms absorb the market share of smaller competitors who cannot match their cost structure or service speed. The downstream effect is income polarisation: returns concentrate among those who own or operate AI systems, while earnings compress for those competing against them with human labour.

The plan does not address the competitive squeeze that AI creates for firms that cannot adopt. When a large competitor automates its service delivery, supply chain, or back-office functions, the small firm relying on human labour faces a cost differential it cannot close through effort or skill alone. The transition cost of AI adoption, the licensing fees, the integration work, the retraining, the restructuring of workflows, may itself be fatal for businesses operating on thin margins.

The consequence is consolidation. AI does not merely improve productivity; it raises the minimum viable scale of competitive operation. The plan treats adoption as uniformly beneficial without acknowledging that for a significant portion of Australian small businesses, the choice is not between adopting AI and falling behind; it is between adopting AI at ruinous cost and being eliminated by competitors who already have. This dynamic concentrates economic power further, reduces local employment diversity, and weakens the very regional and community economies the plan elsewhere claims to support.


3.0 Proposed Policy Alternatives: The Australian Renaissance Framework

To move beyond the reactive nature of the current plan, we propose a transition to a framework in which AI itself provides the data-driven analysis that underpins policy.

The rise of conversational AI has served as a general demonstration of capability. Yet this visible surface belies the massive computational, storage and networking capacity that underpins it. AI is not going to slow down; the triad of speed, scale and connectivity is what drives the training of the systems that now manifest as large language and vision models. The trajectory has no foreseeable ceiling.


4.0 The Substitution Problem

The core of the issue is substitutability. In the same manner that human physical labour was substituted beginning two hundred years ago, tasks requiring mental and physical labour are now being targeted for substitution. As it was then, this substitution need not encompass the full range of what a human worker is capable of doing; only that which is economically valuable and fit for task. In some cases, such as mining, valid safety concerns will almost certainly mandate the introduction of automated systems.

There will be two distinct but overlapping phases to this substitution. The first is already underway: collections of task-focused AI systems, now termed agentic systems.


5.0 Conclusion

The 2025 National AI Plan is not a negligent document. It is a competent articulation of a government that has recognised the significance of artificial intelligence and wishes to be seen responding to it. Its authors have consulted widely, catalogued existing programs, and expressed the right aspirations. The plan fails not for lack of effort but for lack of depth, and for arriving far too late.

It offers reskilling for a workforce whose skills will be obsolete before the training is complete. It offers voluntary guardrails in a market where competitive pressure makes compliance irrational. It celebrates foreign infrastructure investment without recognising that the compute it welcomes will execute the displacement it claims to prevent. It promises affordable energy while tripling demand on a grid it has no plan to stabilise. It declares that every Australian will benefit while providing no mechanism to distribute the gains. And it encourages small businesses to adopt the very technology that will consolidate them out of existence.

Each of these failures shares a common root: the plan has no theory of how AI enters the economy. It does not understand task-level substitution. It does not measure the hollowing of occupations. It does not model the speed of capability growth. It treats artificial intelligence as a new gadget to be deployed rather than a disruptive general purpose technology, still in its infancy, to be managed.

We wrote our first warning in 2018. At the time, large language models did not exist in their current form. The argument did not depend on them; it depended on the economics of substitution, which are older than any particular technology. Seven years later, the technology has arrived with a speed that has vindicated every concern we raised, and the government's response is a plan that would have been inadequate in 2018, let alone in 2025.

The Australian Renaissance Party does not oppose artificial intelligence. We oppose the failure to prepare for it. The social toll of technological change is a policy choice. The 2025 National AI Plan has made its choice: optimism without structure, aspiration without mechanism, consultation without compulsion. Australians will bear the cost of that choice, as they have borne the cost of every previous failure to manage industrial transition.

Blake's dark satanic mills were not evil in themselves. They were the engines of a prosperity that took three generations to arrive. The question was never whether the mills should be built, but whether the people who lived beside them would survive long enough to see the benefit. That question has returned. The government's answer, so far, is not sufficient.