An Australian Renaissance Party Discussion Paper
Humans cannot outcompute machines; we cannot out-scale them; nor can we out-connect them. But a society that automates its citizens out of productive life produces anomie. This paper argues that deliberate, efficient intervention is required to maintain the conditions under which human participation in society remains possible, and proposes the policy architecture to achieve it.
1.0 The Asymmetry
The case against human competitiveness in the age of intelligent machines rests on three properties of modern compute that biology cannot match: speed, scale and connectivity. Each is subject to exponential improvement. None is constrained by the biology that constrains us.
1.1 Biological Constants
The human brain operates at roughly 100 petaflops, a figure that has not changed in anatomically meaningful terms for approximately 200,000 years. Cranial volume is bounded by the birth canal; neural signalling speed is bounded by the electrochemistry of the axon; metabolic overhead is bounded by what the body can supply and cool. These are biological constraints not amenable to incremental improvement except over evolutionary timescales. Humans are, in computational terms, a fixed quantity.
1.2 The Unbound Machine
Machines are not so constrained. Processing speed doubles on a cycle measured in years. Storage capacity scales with manufacturing investment. Network bandwidth expands with every fibre laid and every satellite launched. As we described in our earlier paper, the cross-brain latency of the human corpus callosum, approximately 37 milliseconds, is slower than the round-trip time for a signal from Canberra to Sydney and back over modern fibre. The same fibre links can stream video of a soccer goal from London to Sydney faster than a human spectator can turn to a friend and announce the score. This disparity is widening, and it is widening on all three axes simultaneously.
1.3 Imprinting, Deployment, and the Fork Advantage
Each human brain requires at least a decade of imprinting, socialisation and formal education before it becomes economically useful at all, and considerably longer for intellectual and professional utility. A machine learning model can be trained in weeks, deployed in seconds, and cloned in milliseconds: a single trained mind can become a thousand specialised workers simultaneously, each with a different task context, at negligible marginal cost. Humans cannot easily fork. A human mind takes twenty years to reach the starting line, and the knowledge it acquires may be obsolete before it arrives. It is possible that brain-computer interfaces will, in time, extend some of these limits; but that technology remains nascent, raises profound ethical questions of its own, and is beyond the scope of this paper. We are concerned here with humans as they are, not as they may one day be augmented to become.
1.4 Teams and Agentic Systems
A second way that humans have historically transcended individual cognitive limits is task partitioning into teams. Specialisation and coordination allow a group to accomplish what no single mind can due to the time required for training and specialisation. This is, in essence, how civilisation was built. But this paradigm offers no refuge from the present challenge, because it applies equally to machines. Collections of task-focused AI systems, coordinated through shared memory and orchestration layers, are precisely what the field now terms agentic systems. They partition, delegate, and reassemble in the same manner as a human team, but they do so at machine speed, with machine precision, and without the coordination overhead of language, fatigue, co-locality or misunderstanding. The organisational advantage that once belonged exclusively to human groups is now replicable in silicon at a scale and tempo that no human organisation can match. Humans can participate in these hybrid teams, and for a time they will. But the iso-surface that describes the totality of human ability is finite, and as machine capability expands it will be surpassed in every measurable dimension. The one faculty that may remain exclusively ours is consciousness, the subjective experience of being. Unfortunately, consciousness has no defined economic value. We contend, however, that it has infinite sociological value, and it is on this contention that the case for intervention ultimately rests.
1.5 Deterministic Quality and the ISO Selection Pressure
There is a further asymmetry that compounds the competitive disadvantage. A trained model, given identical input, can produce identical output. Techniques such as deterministic decoding and output deduplication ensure that once a process is validated, it can be reproduced with zero variance across millions of instances. This is precisely what industrial quality frameworks demand. W. Edwards Deming defined quality as sameness: the reduction of variation in output to the minimum achievable level. ISO 9000 and its descendants encode this principle into the fabric of global supply chains. Human workers, by their nature, introduce variance. They fatigue, they err, they interpret. Machines, configured for deterministic inference, do none of these things. The consequence is that any production or service process governed by quality standardisation will, all else being equal, preferentially select AI systems over human labour. The market does not need to intend displacement; the quality framework mandates it.
1.6 The Arithmetic of Convergence
For any task whose value is a function of speed, consistency, scale, or connectivity, a sufficiently capable machine will eventually perform it more cheaply and more reliably than a human being. An exponentially improving capability approaching a fixed one must, for any given threshold, eventually surpass it. The only variables are time and economic incentive.
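The convergence claim can be made precise. As a minimal formalisation, assume for illustration that machine capability doubles every $T$ years from a current base $M_0$, while human capability holds at a fixed level $H$; none of these symbols are measured values from this paper, only placeholders for the shape of the argument:

```latex
% Illustrative only: M_0, T and H are free parameters, not measured values.
\[
  M(t) = M_0 \, 2^{t/T}, \qquad H(t) = H \ \text{(constant)}
\]
\[
  M(t^\ast) = H \;\Longrightarrow\; t^\ast = T \log_2\!\left(\frac{H}{M_0}\right)
\]
% For any finite threshold H > M_0 > 0, the crossing time t* is finite.
% Only the doubling period T and the current capability M_0 set the date.
```

The crossing time is finite for every threshold; disagreement about parameters changes the date, not the conclusion.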
Our 2018 submission to the Senate Select Committee on the Future of Work and Workers framed this as the convergence of an unbound trend upon a fixed level and predicted that a fully human-substitutable machine mind would emerge within a decade. Eight years on and two years ahead of our prediction, we have been substantially validated. Large language models now draft legal arguments, generate medical diagnoses, write functional software, and produce financial analysis at professional grade. Computer vision systems operate vehicles and interpret radiology. Agentic architectures coordinate multi-step reasoning across domains that were, at the time of that submission, considered decades away from automation. The set of tasks that humans can perform but machines cannot is shrinking faster than any reskilling program, any educational reform, or any workforce policy can compensate for.
This is the structural reality that any serious policy must confront. Humans will not outcompute machines. We will not out-scale them. We will not out-connect them. The contest, on its own terms, is already decided. The question is not whether machines will surpass us in productive capacity, but what we intend to do about it.
2.0 The Necessity of Participation
If the argument ended with computational asymmetry, the policy response would be simple, albeit brutal: let substitution proceed, redistribute the surplus, and accept that human labour is a relic. Martin Ford (Rise of the Robots), Guy Standing (Basic Income), and to varying degrees Erik Brynjolfsson and Andrew McAfee (The Second Machine Age) have proposed versions of exactly this. They are wrong, and they are wrong for reasons that economics alone cannot illuminate.
2.1 Income Without Purpose
Human beings are not optimisation functions. We are social animals whose psychological architecture was shaped by 300,000 years of cooperative survival. Work, in the broadest sense, is the primary mechanism by which individuals establish identity, contribute to community, and derive a sense of purpose. One of the first questions people ask each other is "What do you do?" The clinical literature on unemployment is unambiguous on this point: sustained exclusion from productive activity is associated with depression, substance abuse, family breakdown, and elevated mortality. These are not secondary effects of income loss; they persist even when income is maintained through transfers. It is the exclusion itself that does the damage. The historical evidence suggests that the primary beneficiaries of large-scale idle income are the gambling, alcohol, and entertainment industries, not the individuals directly receiving it.
2.2 Child-Rearing as Productive Work
There is one domain of human labour that no automation thesis can render obsolete: the bearing and raising of children. The division of labour within family structures is not arbitrary; it reflects deep evolutionary pressures. For most of our species' history, cooperative child-rearing, provisioning, and protection formed the nucleus of social organisation. These roles carried real economic and survival value. The reason this work has never been properly valued in monetary terms is likely structural rather than cultural. Currency evolved as a medium for inter-tribal exchange, trade between strangers who had no reciprocal obligation to one another. Intra-tribal exchange, the cooperative labour of the family and clan, operated on reciprocity and shared survival; it had no need for monetary abstraction. When modern economies inherited this framework and defined productive work as work that generates monetary exchange, they systematically excluded the very activities that held societies together from within. It is a blind spot built into the architecture of money itself.
Child-rearing is among the most demanding, skilled, and socially consequential forms of work that exists, and it is overwhelmingly unpaid. If we are serious about the proposition that societal participation has intrinsic value, then child-rearing must be recognised as a productive occupation and compensated accordingly. A society that pays machine operators but not the people who produce and shape its next generation has its accounting backwards.
2.3 The Lesson of Deindustrialisation
A society that automates its productive base without providing meaningful avenues for human participation does not produce a leisured utopia. It produces anomie. The historical precedents are instructive. Communities built around single industries that collapsed (the coal towns of Wales, the steel towns of the American Midwest, the textile towns of northern England) did not transition gracefully into post-industrial fulfilment. They fell into generations of social decay that no amount of welfare could arrest, precisely because welfare addresses income but not purpose.
2.4 The Policy Implication
The implication for AI policy is fundamental. Any transition framework that treats displacement purely as an economic problem to be solved with redistribution will fail. Participation in the productive life of a society is, for the vast majority of human beings, a psychological necessity. The policy challenge is therefore not merely to maintain incomes in the face of automation but to maintain the conditions under which human engagement in society remains possible, purposeful, and real.
3.0 The Thumb on the Scale
3.1 The Case for Intervention
If machines will inevitably outperform humans on the metrics that markets reward, and if human participation in productive life is a psychological necessity that markets will not spontaneously preserve, then the scales must be deliberately weighted. Policy must intervene to ensure that the competitive equation retains a place for human contribution. This is neither protectionism nor Luddism. It is the recognition that an unmanaged market will optimise humans out of the productive loop, and that the social consequences of doing so are catastrophic. The intervention must be structural, permanent, and designed with the same rigour that engineers apply to safety margins: not because the bridge will certainly fail, but because the cost of failure is unacceptable.
3.2 Precedent
The Amish have asked this question of every technology for three centuries. Each innovation is assessed against a single criterion: will it strengthen or weaken family structure, community interdependence, and shared purpose? Their communities have low rates of depression, strong social cohesion, and economic self-sufficiency, outcomes that elude many far wealthier societies. We are not proposing such a cultural withdrawal; a nation-state cannot opt out of the global economy as a religious community can. But we are proposing that the same evaluative logic be applied through policy at scale: not can this technology replace all human economic participation, but should it, and if so, what structures must exist to preserve the social participation that the displacement removes?
3.3 The Efficiency Constraint
The critical constraint is efficiency. An intervention that preserves human participation at the cost of crippling the productive gains of automation defeats its own purpose. The surplus generated by intelligent machines is precisely what makes a managed transition possible. Squander that surplus on inefficient make-work or bureaucratic overhead, and the resources for genuine human engagement evaporate. The thumb must be pressed on the scale, but it must be pressed precisely.
3.4 Comparative Advantage in the Age of Machines
Not all economic activity is equally subject to technological displacement. The sectors most resistant to substitution are those where human presence, judgement, or relationship is the service: care work, education, counselling, the trades that require physical adaptability in unstructured environments, community governance, and as argued above, child-rearing. It is far more efficient to encourage employment in industries where humans retain a durable comparative advantage than to defend positions in sectors where the economics of substitution are overwhelming. The question is how to identify these domains and direct labour toward them without resorting to central planning, which has its own well-documented failure modes. The answer is to let the market do what markets do best: discover value. We propose three interlocking mechanisms to achieve this.
3.5 The Compute Tax
If it is compute that directly competes against the human brain, then it is compute that should bear the tax. We propose a levy on computational capacity, applied at two points of entry. First, GPU and accelerator hardware imports would attract a tariff, priced to reflect the displacement potential of the processing power they represent. Second, the output of foreign-sourced AI services, the tokens generated by large language models hosted overseas, would attract a small per-token levy, collectable by the provider in the same manner as GST. Compute hardware or AI services exported from Australia would receive offsetting credits, encouraging the development of sovereign AI capability and ensuring that the tax penalises the import of displacement, not the export of Australian innovation.
The revenue raised by this tax is not hypothecated to welfare. It is directed to a single purpose: reducing the cost of human employment.
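The two collection points and the export offset described above can be sketched as follows. This is an illustrative model only: every rate, name, and figure is invented for exposition and is not a proposed setting.

```python
# Illustrative sketch of the two-point compute levy described above.
# All rates are placeholders for exposition, not proposed figures.

HARDWARE_TARIFF_RATE = 0.10      # ad valorem tariff on imported GPU/accelerator hardware
PER_TOKEN_LEVY_AUD = 0.00002     # levy per token of foreign-hosted AI inference
EXPORT_CREDIT_RATE = 1.0         # exports credited at the same rate imports are taxed


def hardware_levy(value_aud: float) -> float:
    """Tariff on GPU and accelerator hardware at the border."""
    return value_aud * HARDWARE_TARIFF_RATE


def token_levy(tokens: int) -> float:
    """Per-token levy on foreign-sourced AI services, collected like GST."""
    return tokens * PER_TOKEN_LEVY_AUD


def net_compute_tax(imports_aud: float, tokens_in: int,
                    exports_aud: float, tokens_out: int) -> float:
    """Net position: imported displacement is taxed, exported capability is credited."""
    gross = hardware_levy(imports_aud) + token_levy(tokens_in)
    credit = EXPORT_CREDIT_RATE * (hardware_levy(exports_aud) + token_levy(tokens_out))
    return gross - credit
```

A firm importing $1m of accelerators and consuming a billion foreign-hosted tokens would owe a levy; the same activity in the export direction would earn an equal credit, which is the mechanism's intended symmetry.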
3.6 The Negative Payroll Tax
The compute tax revenue funds a reduction in, and where warranted an inversion of, payroll tax. By lowering the effective cost of hiring a human being, the negative payroll tax directly counteracts the price advantage that automation holds in marginal cases. It does not attempt to make humans cheaper than machines in every domain; that would be neither possible nor desirable. It tilts the equation in sectors where the margin is narrow, making human employment the economically rational choice in a broader range of circumstances than an unmanaged market would produce.
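The marginal-case logic can be shown with a toy calculation. The wage, machine cost, and tax rates below are invented for exposition; the point is only that a modest subsidy flips the decision where the margin is narrow, and leaves it untouched where it is not.

```python
# Marginal-case illustration of the negative payroll tax.
# Wage, machine cost, and rates are invented for exposition.

def effective_labour_cost(wage: float, payroll_tax_rate: float) -> float:
    """Annual cost of a hire; a negative rate is a hiring subsidy
    funded by the compute tax."""
    return wage * (1 + payroll_tax_rate)


wage = 80_000.0          # annual wage for the role
machine_cost = 82_000.0  # annualised cost of the automated alternative

# Under a conventional 5% payroll tax the machine wins the marginal case;
# under a 5% negative rate the human does.
assert effective_labour_cost(wage, 0.05) > machine_cost    # 84,000 vs 82,000
assert effective_labour_cost(wage, -0.05) < machine_cost   # 76,000 vs 82,000
```

Where the machine alternative costs half the wage, no plausible subsidy changes the outcome, which is precisely why the mechanism targets the narrow margins rather than every domain.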
3.7 Floor and Trade
The second mechanism operates at the industry level. For each ANZSIC industry classification, a human participation floor is established, fixed to 2025 headcount-to-revenue ratios as the baseline: a minimum below which an enterprise may not fall. An employer that wishes to reduce its human workforce below this floor, for reasons other than safety, does not face a prohibition. Instead, it must purchase human employment credits from another enterprise that exceeds its own floor. The credits are tradeable, and their market price is set by supply and demand.
This is structurally analogous to carbon trading, but with labour as the unit of account. Industries that are natural sinks for human employment, those where human presence adds value and substitution is difficult, will generate surplus credits and profit from doing so. Industries that are natural sources of displacement will purchase credits, and the cost of those credits will be factored into their automation decisions. The market, not a central planner, determines where human labour is most productively preserved.
Capitalism is thereby put to work finding where humans can work. The thumb presses on the scale; the market decides where the weight falls.
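The credit arithmetic of Floor-and-Trade reduces to a single position per enterprise. The sketch below assumes the floor is a headcount-per-revenue ratio fixed at the 2025 baseline, as proposed above; the baseline value and firm figures are invented for illustration.

```python
# Sketch of a Floor-and-Trade credit position, assuming the floor is a
# headcount-per-revenue ratio fixed at the 2025 baseline. Figures invented.

def credit_position(headcount: int, revenue_aud: float,
                    baseline_ratio: float) -> float:
    """Positive: surplus credits available to sell.
    Negative: credits the enterprise must purchase to fall below its floor."""
    floor = baseline_ratio * revenue_aud   # minimum headcount implied by the baseline
    return headcount - floor


BASELINE = 5 / 1_000_000   # illustrative 2025 baseline: five employees per $1m revenue

# A heavy automator and a natural employment sink, both on $10m revenue:
automator = credit_position(40, 10_000_000, BASELINE)   # floor 50 -> must buy 10 credits
sink = credit_position(70, 10_000_000, BASELINE)        # floor 50 -> can sell 20 credits
```

The automator's purchase and the sink's sale clear at a market price, so the cost of displacement flows to wherever human labour is most productively retained, with no central planner choosing the sectors.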
3.8 The Right of Human Election
Any person interacting with an automated system in a commercial or governmental capacity must have the right, at any point in the interaction, to elect transfer to a human representative. This right must be immediate, unconditional, and fulfilled within a defined maximum response time — we propose no more than two minutes for telephony and no more than five minutes for text-based channels.
This is not a customer service standard; it is a structural requirement of the participation framework. If the policy objective is to preserve human engagement in the productive economy, then the most visible point at which that engagement occurs — the interface between an organisation and the people it serves — cannot be permitted to become fully automated by default. The right of election ensures that human roles persist not merely as back-office accounting entries but as accessible, functioning points of contact in the daily experience of citizens.
Where a dispute arises from any automated decision — a claim denied, an application rejected, a service withdrawn — the affected party must have recourse to a human arbitrator empowered to review and override the machine's determination. This is not an appeal to a second algorithm. It is the right to have a human being, exercising human judgement, examine the substance of the case. The arbitrator must be independent of the automated system that produced the original decision and must render a determination within a defined timeframe. Automated efficiency is not a licence to automate accountability.
This obligation carries a dual benefit. The mandate to maintain human staffing sufficient for timely conflict resolution creates a direct economic incentive for enterprises to improve the quality of their automated systems. Every interaction that escalates to a human representative is a cost. Businesses will invest in better AI precisely because they wish to reduce the number of escalations, and the customer experience improves as a consequence. The human fallback does not merely catch failures; its existence pressures the system to produce fewer of them.
The enforcement mechanism is straightforward. Any enterprise that deploys an AI system in a customer-facing or citizen-facing capacity must maintain sufficient human staffing to meet both the response-time guarantee and the arbitration obligation. Failure to do so constitutes a breach, subject to the same reporting and penalty framework as other consumer protection obligations. The cost of maintaining this capacity is, by design, a cost of automation: if a firm wishes to automate its front line, it must retain a human rear guard.
The obligation applies to all sectors, but with particular force in essential services: banking, insurance, healthcare, telecommunications, utilities, and government. In these domains, the person on the other end of the line is not a consumer exercising a preference; they are a citizen exercising a right. No algorithm should stand as the sole intermediary between an Australian and access to the services that define participation in modern life, and no algorithm should be the final word when that access is denied.
3.9 Safety Exemptions
Automation that demonstrably improves human safety, such as mining robotics, autonomous hazard response, or industrial process control in dangerous environments, may be exempted from both the compute tax and Floor-and-Trade obligations by application. The objective of this framework is to preserve human participation in productive life, not to preserve human exposure to risk.
3.10 Expanding the Economy without the Population
The safety exemption is also a growth strategy. One way to offset any economic friction introduced by the human participation framework is to greatly expand the total size of the economy. Australia is uniquely positioned to do this in three sectors where automation, infrastructure investment, and sovereign resources converge.
The first is agriculture. Australia possesses vast arable and pastoral land, much of it underutilised, in a world that will need dramatically more food. Climate instability, population growth, and the degradation of agricultural land elsewhere will make reliable food supply a matter of global strategic importance within a generation. Large-scale irrigation projects, such as the expansion of the Ord River scheme and the development of underutilised northern water resources, can bring vast tracts of currently unproductive land into cultivation. Paired with automated broadacre farming, precision agriculture, and robotic harvesting, these investments can scale Australian food production far beyond what human labour alone could achieve, and every tonne exported generates revenue that funds the domestic human participation framework.
The second is mining. Australia is already a global leader in both mineral extraction and mining automation, with autonomous haulage, remote drilling, and integrated mine management systems deployed at scale across the Pilbara and elsewhere. The nation sits on some of the largest deposits of critical minerals on earth: lithium, rare earths, iron ore, copper, the very materials that the global energy transition and the AI hardware supply chain depend upon. Extending this existing lead through further automation is safer, more efficient, and capable of operating continuously in conditions too dangerous or remote for human workers. And even a fully automated mine does not exist in isolation. Every extraction operation generates an extensive logistical tail: supply chains, equipment maintenance and repair, land management and environmental remediation, housing and community infrastructure for support personnel, fly-in fly-out services, airstrips, medical facilities, and the administration that binds them together. These are roles that resist full automation precisely because they are varied, context-dependent, and embedded in physical communities. The expansion of sovereign mining capacity, exempt from Floor-and-Trade obligations on safety grounds, generates the export surplus and tax base upon which the rest of the framework depends.
The third is energy. Both automated agriculture and automated mining are energy-intensive at scale, and Australia's current dependence on imported fuel, made painfully visible by the latest round of global oil supply restrictions, represents a strategic vulnerability that no amount of economic expansion can offset if the power supply is not sovereign. Australia holds some of the world's largest reserves of thorium, a fuel suitable for molten salt reactors that China has already begun demonstrating at operational scale. Thorium reactors offer high energy density, passive safety, minimal long-lived waste, and independence from the global uranium enrichment chain. A national commitment to thorium energy would simultaneously power the automated expansion of agriculture and mining, eliminate fuel import dependency, and constitute a mega-project in its own right, generating decades of skilled employment in construction, engineering, and operation.
In all three cases, the logic is the same: automate where it is dangerous or where it expands national productive capacity, secure the energy base that makes expansion possible, and redirect the surplus to sustain human employment where it matters most.
3.11 Work-Sharing and the Shorter Week
In the longer term, a further mechanism becomes available: the redistribution of work itself through a shorter working week. If the total demand for human labour contracts even as the economy grows, the remaining work can be distributed more broadly by reducing the standard working week to, say, three days per worker. A six-day operating week with two shifts doubles the number of people engaged in each workplace without reducing the enterprise's productive hours. The effect is immediate job-sharing: each position sustains two households instead of one, and twice as many individuals retain the psychological and social benefits of productive participation.
This is not a novel idea; it echoes the transition from the six-day to the five-day working week that accompanied earlier waves of mechanisation. The difference is that in the age of intelligent automation, the reduction may need to be deeper and more deliberate. The negative payroll tax mechanism described above makes this transition economically viable: if the cost of employing a human is subsidised, the marginal cost of splitting a role between two workers is manageable, and the social return is substantial.
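The shift arithmetic above is simple enough to state exactly. The function below assumes a six-day operating week divided into equal non-overlapping shifts; the headcounts are invented for illustration.

```python
# Work-sharing arithmetic: a six-day operating week divided into three-day
# shifts doubles the people each workplace engages without reducing the
# enterprise's productive days. Headcounts are illustrative.

def workers_engaged(positions: int, days_per_worker: int,
                    operating_days: int = 6) -> int:
    """People sharing the operating week across equal shift rotations."""
    return positions * (operating_days // days_per_worker)


assert workers_engaged(1_000, 3) == 2_000   # two 3-day shifts: each role sustains two households
assert workers_engaged(1_000, 6) == 1_000   # single-shift status quo, for comparison
```

The subsidy from the negative payroll tax is what absorbs the fixed per-employee overheads that otherwise make two part-time workers dearer than one full-time worker.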
3.12 The Virtuous Circle
This produces a counterintuitive but critical insight: economic expansion through automation should be a policy priority, not an obstacle to be managed. The larger the economy grows, the smaller the compute and ancillary taxes need to be as a proportion of output to maintain the human participation framework. A two-trillion-dollar economy requires a heavier tax, as a share of output, than a four-trillion-dollar one to fund the same level of human employment subsidy. Moreover, a larger and more diverse economy generates more niches, more edge cases, more roles where human judgement, presence, or creativity is genuinely valued. Economic expansion does not merely fund participation; it creates the conditions in which participation arises naturally. The framework does not shrink the economy to protect humans; rather, it grows the economy through automation in order to fund and expand their participation elsewhere.
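The diminishing-burden claim is simple arithmetic. Writing $S$ for the annual cost of the human employment subsidy and $Y$ for economic output (both symbols introduced here for illustration, not figures from this paper):

```latex
% S: annual cost of the employment subsidy; Y: GDP. Symbols illustrative.
\[
  r(Y) = \frac{S}{Y}
\]
\[
  r(\$4\,\text{trillion}) = \tfrac{1}{2}\, r(\$2\,\text{trillion})
\]
% Doubling output halves the proportional burden of funding the same subsidy,
% before counting the additional niches a larger economy creates.
```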
4.0 Conclusion
The argument of this paper rests on three propositions. First, that the computational asymmetry between human biology and modern machines is structural, widening, and ultimately decisive for any task whose value is a function of speed, scale, and connectivity. Second, that human participation in the productive life of a society is a psychological necessity, not a luxury, and that no amount of redistributed income can substitute for it. Third, that if the market left to itself will optimise humans out of the productive loop, then policy must deliberately weight the equation in their favour, and must do so efficiently enough to preserve the surplus that makes the intervention possible.
We have proposed a framework of interlocking mechanisms to achieve this: a compute tax that captures the productivity surplus of automation at its source; a negative payroll tax that redirects that surplus into reducing the cost of human employment; a Floor-and-Trade system that uses market pricing to discover where human labour is most productively preserved; safety exemptions that encourage automation where it protects human life; and a national commitment to expanding the economy through agriculture, mining, and sovereign energy so that the framework funds itself at diminishing cost.
None of this requires hostility to technology. The machines are coming, and we think they should come. Let them grow the economy, eliminate dangerous work, and generate wealth on a scale that previous generations could not have imagined. The question has never been whether to permit this transformation, but whether to manage it. An unmanaged transition will concentrate wealth, hollow out communities, and produce a generation without purpose. A managed one can do what the previous industrial revolution did not: distribute the gains before the damage is done.
The scales will tip. Let's put a friendly thumb on the weight.