It’s a good time of year to look back at the bigger questions facing us. So … AI it is! Here are some of my current thoughts, mostly so that I can look back in five years’ time and laugh at how terribly naive I was / we were.
The paper Situational Awareness – The Decade Ahead paints an extraordinary picture of the next decade, one where AI transforms almost every aspect of society at a breakneck pace. It’s breathtaking in scope and implication, and well worth a read, and it provokes the question of whether change can really happen as rapidly as it claims. Let’s ask that question, and propose a “Societal Speed Limit” which I think will be the ultimate decider of the pace of AI-driven change.
Three points from the paper stood out most strongly for me:
1. Credible projections of an incredibly fast pace of improvement
The paper forecasts AI capabilities to advance at an astonishing rate, driven by improvements in hardware availability (compute), algorithms, and deployment methodologies (e.g., agentic tools). Together, these could deliver up to 10 orders of magnitude of improvement (i.e., AIs ten billion times more capable) in less than a decade, a staggering figure. Given that AI already exceeds human-level capabilities in many narrowly defined areas, this would inevitably change the world.
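As a sanity check on what that figure implies, here is a quick back-of-envelope calculation (my own arithmetic; the eight-year horizon is an assumption I have picked, not a number from the paper):

```python
# Back-of-envelope: what "10 orders of magnitude in under a decade" implies.
# The 8-year horizon is my own assumption, purely for illustration.
years = 8
orders_of_magnitude = 10
annual_multiplier = 10 ** (orders_of_magnitude / years)
print(f"~{annual_multiplier:.0f}x more capable per year")  # ~18x, every year
```

Sustaining a roughly 18x improvement every single year for eight years running is the scale of claim we are dealing with.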
2. The Adoption Curve: Slow, Then Sudden
AI tools today often stall at the proof-of-concept stage, requiring integration and adaptation by organisations. But the emergence of human-level agents that can directly use existing tools without integration effort could act as a tipping point: you “hire” an AI the same way you would hire a human, and it uses the same tools a human hire would use. This would immediately make most PoCs irrelevant, and open far more human roles to AI augmentation or replacement.
3. The Geopolitical Frame
The paper spends a lot of time on U.S.-China competition, arguing that AI leadership could define not just economic success but also military dominance. While this might be geopolitically accurate, it feels a bit sad to me that the focus moves so quickly to the specific great-power rivalry of this point in time, given AI’s broader historical importance. This is possibly a pivotal point in the history of our species, or even of life on Earth! It’s a bit like imagining that the invention of a usable numerical system was primarily about ancient Sumerian-Babylonian competition.
Where I agree
- No ceiling in sight: Some suggest that AI is plateauing. That feels like a very premature claim, given that we’re barely more than two years into the post-ChatGPT world and already far beyond the capabilities of the original ChatGPT. Every week still brings breakthroughs.
- Cost as a non-constraint: Yes, AI is (arguably) expensive. But, for example, the costs of specific OpenAI capabilities have come down by ~99% over the last two years; this is Moore’s law on steroids (see the back-of-envelope sketch after this list). Barriers to adoption are unlikely to be economic, short-term corrections notwithstanding.
- Surprises ahead: We cannot imagine all the impacts AI will have, and we will be surprised. Looking back, experts expected the progress of the last five years to take decades, and few expected current AI to turn out so good at creative work (writing, art) in particular.
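To put the “Moore’s law on steroids” point in numbers, here is the back-of-envelope sketch referred to above (my own arithmetic, taking the ~99% figure at face value):

```python
# Rough arithmetic on the ~99% cost decline over two years (my own
# back-of-envelope, taking the figure at face value).
two_year_factor = 0.01                  # costs fall to 1% of the original
annual_factor = two_year_factor ** 0.5  # ~0.1: prices divide by ~10 per year
moores_law_annual = 0.5 ** 0.5          # ~0.71: Moore's law halves costs every ~2 years
print(f"AI costs: x{annual_factor:.2f} per year; Moore's law: x{moores_law_annual:.2f} per year")
```

A roughly 10x annual cost reduction, against Moore’s law’s roughly 1.4x, is why I don’t expect economics to be the binding constraint.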
Where I disagree: the pace of change on the ground
Technical Roadblocks? Yes (but it doesn’t matter)
Technically, I think we’ll hit some roadblocks. My current opinion is that Situational Awareness underestimates the architectural challenges we still need to overcome.
Current LLMs are built on “attention” as the simplifying breakthrough. But this architecture inherently has limited internal mental state, which is likely crucial for persistent goals and a nuanced understanding of the environment, such as noticing when the model is stuck in a non-productive loop. Addressing this may require significant architectural changes. In particular, a persistent mental state makes training difficult, because the model’s output is no longer deterministically produced by its input alone, but also by the broader model state. It might be that the “world models” approach provides a manageable way for AIs to understand the context of their inputs and outputs. I worry, though, that we need to invent a more self-organising approach to training, probably including recursive connections, i.e., output looping back to input within the model’s neural network. However, this would sacrifice much of the massive training parallelism we won with the attention mechanism.
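To illustrate the trade-off I mean, here is a deliberately toy sketch (my own illustration, not a real model; all shapes and weights are made up). An attention-style step can compute every position in one batched matrix multiply, while a persistent recurrent state forces a sequential loop that cannot be parallelised across the sequence:

```python
# Toy illustration (not a real model): why persistent state resists
# parallel training. All sizes and weights are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 4                       # sequence length, hidden size
x = rng.standard_normal((T, d))   # input "tokens"
W = rng.standard_normal((d, d)) * 0.1

# Attention-style: every output is a function of the inputs alone,
# so all T positions can be computed in one parallel batch of matmuls.
scores = x @ x.T                                               # (T, T) similarities
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out_parallel = weights @ x                                     # no sequential dependency

# Recurrent-style: each step depends on the persistent state from the
# previous step, forcing a sequential loop of length T during training.
h = np.zeros(d)
out_recurrent = []
for t in range(T):
    h = np.tanh(h @ W + x[t])     # h_t depends on h_{t-1}: cannot be batched
    out_recurrent.append(h)
```

That sequential loop is exactly what attention removed, which is why transformer training scales so well across hardware, and why reintroducing persistent state is painful.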
All of the above may be hopelessly naive (I’m not an expert), and anyway it doesn’t really matter: the current models, with conservative extrapolation, are quite enough to completely change society. So, will they?
Societal uptake: Why it will be slower
1. Deploying new technology is never instant
History is full of examples of groundbreaking technologies taking far longer to reshape society than expected. Take electricity: it’s fantastic, but it is still far from universally available across the globe. To achieve its economic advantages, electrification needs an ecosystem: infrastructure, supply chains, capabilities, demand. You can’t use electricity in a factory until you have an economic context with known opportunities and demand, input materials, logistics networks, trained staff, conducive regulations, and so on. This is why rebuilding an economy (e.g., Germany after 1945) is often far easier than creating economic growth from scratch: people remember how the networks worked and can reimplement them, rather than needing to solve all the pieces from first principles.
AI will face similar challenges. It can’t just be “dropped in” to most organisations or systems, even in agent form. If we think of AI today as a vast pool of really smart, low-wage university graduates (with amnesia, though that may be solved in coming years), then the challenge is clear: most organisations cannot productively absorb a big pool of such graduates, because the bottlenecks are elsewhere.
One could argue that AI plus robotics undermines this point: just use robots to build the ecosystem too. But even this takes time: to build the robots, to build the factories that build the robots, to build the mines that supply the materials to the factories, and so on.
2. AI will replace people bottom-up
The way AI replaces human labour will likely follow a bottom-up trajectory, starting with junior roles and tasks. To be clear, that means not only (or even primarily) low-skill roles, but any junior role that can be done with a computer. That’s a lot of roles! The replacement, though, starts at entry-level positions.
Why? Obviously, leaders rarely automate themselves. But beyond self-preservation, senior roles often involve judgment, relationships, and high-stakes decisions that stakeholders are reluctant to entrust to AI. For example, in a law firm it’s easy to imagine junior associates being replaced by AI for drafting contracts or due diligence, but much harder to envision clients trusting AI with the high-stakes negotiations typically handled by partners. Likewise CEOs: even if an AI would probably do a better job … who would be brave enough to make that call?
Additionally, it’s easier to replace, say, 50% of the seats in a standardised role than 50% of a job done by a single person (i.e., a leader).
I expect we’ll see junior positions vanish faster than senior ones, hollowing out traditional career progression.
3. The “societal speed limit” on the rate of producing “losers”
Perhaps the most significant constraint on AI adoption will come from society itself. Disruption creates “winners and losers”, and the pace of that disruption matters. If AI displaces workers faster than society can absorb the shock, the resulting inequality could create enormous political and social backlash.
Let me suggest a principle:
- Society has an “immune response” to fight against change that produces lots of people who feel that their future prospects are deteriorating.
- The greater the rate (percentage of people per annum) at which people experience change that results in deteriorating prospects, the stronger the response.
- The response escalates from pressure on governments to regulate, to voting out those governments in favour of others that promise to act more firmly, all the way to destructive protests and ultimately revolution.
That is, society will “fight back” against change producing too large a share of people with deteriorating prospects, by finding leaders or actions that will successfully slow down the rate of change.
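To make the principle concrete, here is a purely illustrative toy simulation (entirely my own sketch; every parameter is invented, not data): adoption compounds freely at first, but once the annual share of people with deteriorating prospects crosses a tolerance threshold, the “immune response” progressively brakes further adoption.

```python
# Purely illustrative toy model of the "societal speed limit" (my own
# sketch; every number here is an invented assumption, not data).
def simulate(years=15, base_growth=0.5, tolerance=0.05, sensitivity=50.0):
    adoption = 0.01  # fraction of roles meaningfully affected by AI
    for year in range(years):
        displaced_rate = adoption * 0.2  # assume 20% of affected roles shed per year
        # The "immune response": backlash grows once the annual rate of
        # people with deteriorating prospects exceeds a tolerable level.
        backlash = max(0.0, displaced_rate - tolerance) * sensitivity
        effective_growth = base_growth / (1.0 + backlash)
        adoption = min(1.0, adoption * (1.0 + effective_growth))
        print(f"year {year:2d}: adoption {adoption:5.3f}, "
              f"displaced/yr {displaced_rate:5.3f}, backlash {backlash:4.2f}")

simulate()
```

The numbers are meaningless; the shape is the point: change compounds unchecked until the annual rate of “losers” crosses the tolerance threshold, after which society’s response throttles the pace.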
The “societal speed limit” isn’t just a concept—it’s a reality we’ve seen time and again. From the Luddites to modern protests against globalization, society resists changes that leave too many people behind. With AI, this principle will likely shape the pace of adoption as much as the technology itself.
The challenge isn’t just economic; it’s also generational. What happens when young people entering the workforce find fewer paths to meaningful employment? Youth unemployment could lead to disengagement, frustration, and instability, creating long-term societal challenges far beyond the immediate economic impact.
So where to?
To summarise:
- The paper Situational Awareness – The Decade Ahead paints a picture of extraordinarily disruptive and rapid change.
- It may underestimate some of the technical challenges, but the projections are so extreme that even a far slower technical pathway requires us to ask how, and how fast, society can change.
- Social and economic change will be slower than the paper expects, for three reasons:
  - Deploying any technology requires networks, and even a “silver bullet” AI cannot instantly create the ecosystem it needs.
  - Change is likely to start bottom-up in the economy, affecting the youth first.
  - Society has a “speed limit” on how rapidly change can produce people with deteriorating personal prospects. Exceed that limit, and society will force actions to slow the pace of change.
We are in for one hell of a ride in the years to come! Change will come incredibly quickly in some areas. For the rest, I believe it will come faster than most expect, in unexpected ways, but still slower than the Situational Awareness paper projects in its extreme scenarios.
It will affect the youth more quickly, and risk leaving parts of the world with less developed ecosystems even further behind.
The “societal speed limit” may slow the pace of change, but we should not expect this process to be comfortable, as that slowing may come from huge societal unrest. And through it all, we need to avoid a catastrophic AI-safety failure where AIs attack humanity, and avoid a superpower war.