VENDORiQ: AI and the Civilisational Challenge – Dario Amodei at Parliament

AI's rapid evolution demands proactive governance, 'good faith' transparency audits, and agile workforce adaptation to secure democratic stability and economic growth.

On 1 April 2026, Dario Amodei, CEO of Anthropic, addressed the Anthropic Futures Forum at Australia’s Federal Parliament House in Canberra. In a wide-ranging discussion, he explored the trajectory of ‘powerful AI’, the future of the labour market, and the urgent need for a democratic approach to regulation.

The Reality of Powerful AI

Question: You have described the current trajectory of AI as a ‘civilisational challenge’. What are you seeing at Anthropic that makes the arrival of powerful AI a realistic scenario, rather than just hype?

Dario Amodei: I’ve been in this field of AI since 2014… at every stage of this exponential of the increasing intelligence of the technology, there has over and over again been a lot of people… that say, well, you know, it can’t go a step further than this because of this. First, it was… statistical models can’t really fully understand language. Then it was… they can’t really understand what’s going on globally. And it was… they can’t reason. Like, they can’t do math problems or write code. And at every point, there was this very convincing argument that it was absurd that these simple methods would keep improving the models. And yet I’ve watched over and over again that the models actually do get there.

When I look at the more concrete things… I look at, for example, what we’re able to do in coding, where it’s really the case that within Anthropic, many people are barely writing any code anymore. Now, they have plenty of things to do… but there is a real shift here. I think people in the scientific world are starting to get a sense that this isn’t like previous information technologies. There’s more to it. It’s going fast.

Regulating Uncertainty

Question: Public policy experts find it difficult to regulate uncertain, rapidly developing technology with potentially catastrophic costs. How should governments approach this dilemma?

Dario Amodei: I agree entirely with how you’ve laid out the dilemma. About two years ago, a law in California called SB 1047 caused a lot of controversy. This was the first attempt, back in 2024, to regulate the development of frontier AI models. It proposed having these tests you have to run before you release your models… almost like the auto industry, where you have to run all these crash test dummies.

The thing that was difficult and made it so controversial is… the models are getting powerful so quickly. Back in ’24, even more so than today, we didn’t know what the models were capable of and what they were not capable of. It’s very hard to regulate as a prediction. I actually think now, as we’re starting to see more of the cyber and bio-risk, some of the child safety stuff, I think the time for this more auto or air industry type regulation… actually is possible now. But the fundamental challenge remains: we know much less than we would like, the technology is moving faster than we’d like, and so we kind of have to act, but we’re not sure how to act.

The Problem of ‘Black Box’ Models

Question: These models are ‘grown’ rather than built, and we don’t fully understand their internal workings. Should legislation require developers to understand exactly what the tools are doing?

Dario Amodei: We’ve been able to trace neurons inside the model that activate in response to very specific ideas… but all that said, having worked on these methods for several years, we probably understand maybe 10 per cent of what goes on inside the models. And so we’re very far from it. And if we were to legislate, if we were to say, well, you have to understand what’s going on inside the model, we would face a very real risk that no one can comply with that legislation.

Again, I think we can do reasonable things in the middle. We can say, using the interpretability technologies that are available at the time, you have to make a good faith effort to kind of do an audit of these models using the state-of-the-art, best available interpretability-style x-rays.

Predictions for Employment and the Labour Market

Question: Economists often argue that technology augments humans rather than replacing them, but the speed of this shock is unprecedented. What is your prediction for the white-collar workforce?

Dario Amodei: I agree. The economy has this flexibility… but I think the big problem is the speed at which it’s going to happen. When I look at where we’ve gotten in just three years… that’s just really, really fast. I worry about our ability to adapt fast enough.

It’s interesting to look within Anthropic… I keep saying people are increasingly not writing much code. For now, [we have] more need for software engineers rather than less. But I think in not too long, probably less than a year, even that will go away. And when we think about it, we say, yeah, I think we’re going to need fewer pure software engineers in a year than we have now. But there is this adaptability where what I think we’re going to need a lot more of is hybrid roles… solutions architects. We need tens of thousands of those because the business is growing so rapidly.

For someone who says, “I’m a pure software engineer, that’s what I do. I sit around and write code all day”, I am worried that person is going to have a bad time. But if you’re saying, “OK, I’m a software engineer. I am open to, in the future, doing a job that’s sort of 50 per cent what a software engineer is doing and 50 per cent something else”, then I think I see a bright future for them.

Economic Growth and Shared Benefits

Question: History suggests that technology does not automatically share the benefits of growth. What should society and democracy be thinking about to ensure this wealth is distributed?

Dario Amodei: This is a macroeconomic policy issue… because a lot of it is capital-driven, a lot of it is machine-driven, it won’t go to everyone, it may be more concentrated than it should be. So you need taxes. That’s the simplest approach. And they need to be targeted in some way at those who are making the most money here.

Some kind of tax, designed in a sophisticated way to get at the idea that more of the returns are going to capital and less of the returns are going to labour, could somehow counteract that, so that at the end everyone has much more than they would have had before. I don’t see any way to escape that basic conclusion.

AI and Global Democracy

Question: You have argued that AI must be developed in democracies. Tangibly, what difference does it make if this technology is in the hands of a democratic government versus an autocracy?

Dario Amodei: If we look at the authoritarian side of things… China [is] a great example of a country that’s going down what I would consider the wrong path. If you augment [a high-tech surveillance state] with powerful AI… you really can go in the direction of a panopticon here.

And then the inverse of that is, can we use AI to enhance the internal constitution of democracies? Can we use it, for example… to combine the human element of justice with some kind of consistency that kind of machines bring? We could really have a renaissance of democratic institutions.

On the international stage, I see it as about military competition. AI is a powerful technology. And I don’t want autocracies to be militarily more powerful than democracies. Soviet Union or Nazi Germany… if you go back in history and make those countries more militarily powerful, good things would not happen. And so I want to make sure that democracies continue to have the upper hand.
