The AI singularity: Situational Awareness vs the Societal Speed Limit

It’s a good time of year to look back at the bigger questions facing us. So … AI it is! Here are some of my current thoughts, mostly so that I can look back in five years’ time and laugh at how terribly naive I was / we were.

The paper Situational Awareness – The Decade Ahead paints an extraordinary picture of the next decade, one where AI transforms almost every aspect of society at a breakneck pace. It is breathtaking in scope and implication, and well worth a read, and it provokes the question of whether change can really happen as rapidly as it claims. Let’s ask that question, and propose a “Societal Speed Limit” which I think will be the ultimate decider of the pace of AI-driven change.


Three points from the paper stood out most strongly for me:

1. Credible projections of an incredibly fast pace of improvement

The paper forecasts that AI capabilities will advance at an astonishing rate, driven by improvements in hardware availability (compute), algorithms, and deployment methodologies (e.g., agentic tools). Together, these could yield up to 10 orders of magnitude of improvement (i.e., ten-billion-times more capable AIs) in less than a decade: a staggering figure. Given that AI already exceeds human-level capabilities in many narrowly defined areas, this would inevitably change the world.

2. The Adoption Curve: Slow, Then Sudden

AI tools today often stall at the proof-of-concept stage, requiring integration and adaptation by organisations. But the emergence of human-level agents that can directly use existing tools without integration effort could act as a tipping point: you could “hire” an AI in the same way, and give it the same tools, as a human hire. This would immediately make most PoCs irrelevant, and open far more human roles to AI augmentation or replacement.

3. The Geopolitical Frame

The paper spends a lot of time on U.S.–China competition, arguing that AI leadership could define not just economic success but also military dominance. While this might be geopolitically accurate, it feels to me a bit sad that the focus moves so quickly to the specific great-power competition of this point in time, given AI’s broader historical importance. This is possibly a pivotal point in the history of our species, or even of life on earth! It’s a bit like imagining that the invention of a usable numerical system was primarily about ancient Sumerian–Babylonian competition.


Where I agree

  • No ceiling in sight: Some suggest that AI is plateauing. That feels like a very premature claim, given that we’re barely more than two years into the post-ChatGPT world, and already far beyond the capabilities of the original ChatGPT. Every week still brings breakthroughs.
  • Cost as a non-constraint: Yes, AI is (arguably) expensive. But, for example, the costs of specific OpenAI capabilities have come down by ~99% over the last two years. This is Moore’s law on steroids. Barriers to adoption are unlikely to be economic, short-term corrections notwithstanding.
  • Surprises ahead: We cannot imagine all the impacts AI will have, and we will be surprised. Looking back, the experts expected it to take decades to make the progress we’ve seen in the last five years, and few expected current AI to turn out to be so good at creative work (writing, art) in particular.

Where I disagree: the pace of change on the ground

Technical Roadblocks? Yes (but it doesn’t matter)

Technically, I think we’ll hit some roadblocks. My current opinion is that Situational Awareness underestimates the architectural challenges we still need to overcome.

Current LLMs are built on “attention” as the simplifying breakthrough. But this architecture inherently has limited internal mental state, which is likely crucial for persistent goals and a nuanced understanding of the environment, such as noticing when the model is stuck in a non-productive loop. Addressing this may require significant architectural changes. In particular, a persistent mental state makes training difficult, as the model’s output is no longer deterministically produced by its input alone, but also depends on that broader internal state. It might be that the “world models” approach provides a manageable way for AIs to understand the context of their inputs and outputs. I worry, though, that we need to invent a more self-organising approach to training, probably including recursive connections, i.e., output looping back to input within the model’s neural network. However, this would forfeit much of the massive training efficiency we won with the attention mechanism.

The paragraph above may be hopelessly naive (I’m not an expert), and anyway doesn’t really matter: the current models, with conservative extrapolation, are quite enough to completely change society. So, will they?

Societal uptake: Why it will be slower

1. Deploying new technology is never instant

History is full of examples of groundbreaking technologies taking far longer to reshape society than expected. Take electricity: it’s fantastic, but it is still far from universally available across the globe. To achieve its economic advantages, electrification needs an ecosystem: infrastructure, supply chains, capabilities, demand. You can’t use electricity in a factory until you have an economic context with known opportunities and demand, input materials, logistics networks, trained staff, conducive regulations, and so on. This is why rebuilding an economy (e.g., Germany in 1945) is often far easier than creating economic growth from scratch: people remember how the networks worked and can reimplement them, rather than needing to solve all the pieces from first principles.

AI will face similar challenges. It can’t just be “dropped in” to most organisations or systems, even in agent form. If we think of AI today as a vast pool of really smart, low-wage university graduates (with amnesia, though that may be solved in coming years), then the challenge is clear: most organisations cannot productively absorb a big pool of such graduates, as there are bottlenecks elsewhere.

One could argue that AI plus robotics undermines this point: just use robots to build the ecosystem too. But even this needs time: to build the robots, to build the factories that build the robots, to build the mines that provide the materials to the factories, and so on.

2. AI will replace people bottom-up

The way AI replaces human labour will likely follow a bottom-up trajectory, starting with junior roles and tasks. To be clear, though: not only (or even primarily) low-skill roles, but rather junior roles that can be done with a computer. That’s a lot of roles! But it starts at entry-level positions.

Why? Obviously, leaders rarely automate themselves. But beyond self-preservation, senior roles often involve judgment, relationships, and high-stakes decisions that stakeholders are reluctant to entrust to AI. For example, in a law firm, it’s easy to imagine junior associates being replaced by AI for drafting contracts or doing due diligence, but much harder to envision clients trusting AI with the high-stakes negotiations typically handled by partners. Likewise CEOs: even if an AI would probably do a better job … who would be brave enough to make that call?

Additionally, it’s easier to replace, for example, 50% of the seats in a standardised role, than 50% of a job done by a single person (i.e., a leader).

I expect we’ll see junior positions vanish faster than senior ones, hollowing out traditional career progression.

3. The “societal speed limit” on the rate of producing “losers”

Perhaps the most significant constraint on AI adoption will come from society itself. Disruption creates “winners and losers”, and the pace of that disruption matters. If AI displaces workers faster than society can absorb the shock, the resulting inequality could create enormous political and social backlash.

Let me suggest a principle:

  • Society has an “immune response” to fight against change that produces lots of people who feel that their future prospects are deteriorating.
  • The greater the rate (percentage of people per annum) at which people are experiencing change that results in deteriorating prospects, the stronger the response.
  • The response escalates from pressure on governments to regulate, to voting out those governments in favour of others that promise to act more firmly, all the way to destructive protests and ultimately revolution.

That is, society will “fight back” against change producing too large a share of people with deteriorating prospects, by finding leaders or actions that will successfully slow down the rate of change.

The “societal speed limit” isn’t just a concept—it’s a reality we’ve seen time and again. From the Luddites to modern protests against globalization, society resists changes that leave too many people behind. With AI, this principle will likely shape the pace of adoption as much as the technology itself.

The challenge isn’t just economic; it’s also generational. What happens when young people entering the workforce find fewer paths to meaningful employment? Youth unemployment could lead to disengagement, frustration, and instability, creating long-term societal challenges far beyond the immediate economic impact.


So where to?

To summarise:

  • The paper Situational Awareness – The Decade Ahead paints a picture of extraordinarily disruptive and rapid change.
  • It may underestimate some of the technical challenges, but the projections are so extreme that even a far slower technical pathway requires us to ask how, and how fast, society can change.
  • Social and economic change will be slower than the paper expects, for three reasons:
    • Deploying any technology requires networks, and any “silver bullet” from AI cannot instantly create the ecosystem for instant change.
    • Change is likely to start bottom-up in the economy, affecting the youth first.
    • Society has a “speed limit” for how rapidly change can produce people with deteriorating personal prospects. Exceed the speed limit, and society will force actions to slow the pace of change.

We are in for one hell of a ride in the years to come! Change will come incredibly quickly in some areas. For the rest, I believe it will come faster than most expect, in unexpected ways, but still slower than the Situational Awareness paper projects in its extreme scenarios.

It will affect the youth more quickly, and risk leaving parts of the world with less developed ecosystems even further behind.

The “societal speed limit” may slow the pace of change, but we should not expect this process to be comfortable, as that slowing may come from huge societal unrest. And through it all, we need to avoid a catastrophic AI-safety failure where AIs attack humanity, and avoid a superpower war.

Book idea: How can organisations successfully reduce complexity??

Anyone got any good examples of organisations which have substantially simplified themselves?? Because I think there’s a fascinating popular social science book waiting to be written here (similar to, e.g., Collapse). But the missing piece is good case studies or examples. In the meantime, here’s the setup for what I mean by complexity, and why I think it’s a real problem that we need to work out how to solve, without using wars.

What is complexity and why does it matter?

Bureaucracy, red tape, inertia, gridlock, overhead, “the swamp”: these are all synonyms for what I mean by complexity. It is any process, law, role or similar that may be well intentioned, or even useful, but has the side effect of making it slightly harder to “get things done” in an organisation.

Let’s look at three examples:

1. Governments

Government complexity shows up in endless laws, regulations, and bureaucracies. Each one imposes a burden on individuals, businesses, and government itself, both in compliance costs and in the increasing proportion of society dedicated to complying with laws and regulations, inside and outside government. Tax codes are a great example, as are means-tested social security systems: for example, in the U.S., the process of applying for benefits often costs recipients hours of effort for a payout that equates to less than minimum wage.

2. Businesses

In businesses, complexity tends to emerge as a response to failure: something goes wrong, and a new process is introduced to prevent it from happening again. Individually, these processes seem logical (reviews, audits, change management) but over time they layer on top of each other, slowing innovation within the organisation. This is one reason startups often outcompete large corporations: they aren’t bogged down by decades of accumulated complexity (see also some other thoughts about how startups beat corporates).

3. Social Organisations

Many social organisations—like religious groups or nonprofits—start with a clear, compelling mission. But over time, they acquire buildings, hire staff, and create processes to manage growth. Eventually, sustaining the organisation itself can overshadow the original mission, with internal politics and inertia becoming barriers to action, or even replacing the original mission of the organisation.

How complexity arises and survives

The key to understanding complexity is recognising how it arises—and why it’s so hard to get rid of. At its core, complexity often results from a mismatch between diffuse costs and concentrated benefits.

Diffuse Costs

The costs of complexity—bureaucracy, paperwork, approval processes—are spread thinly across many people. No single person feels the full burden, so there’s little incentive for any individual to push back.

Concentrated Benefits

The benefits of complexity, on the other hand, are often concentrated on specific groups. For example, a new compliance rule might create a department of specialists whose jobs depend on maintaining that rule. Or a subsidy might benefit a particular industry, which then lobbies to keep it in place. Or a specific example of something that went wrong in the past can motivate a “never again” mindset, at almost any cost.

The Result: Unavoidable complexity!

This dynamic creates three big problems:

1. No Natural Constituency for Simplicity: Few people are motivated to fight complexity because its costs are so widely dispersed.

2. Strong Pushback Against Change: Those who benefit from complexity—either directly or by avoiding past problems—resist efforts to simplify.

3. Weak Incentives for Leaders: Simplifying a system often requires expending political or organisational capital, but the benefits are diffuse and realise only in the long term, while the backlash is immediate and concentrated. Simplification seldom, in practice, wins votes.

How have we historically solved excessive complexity?

Complexity can’t grow forever—eventually, the costs outweigh the benefits, and something has to give. But history shows us that reducing complexity is extraordinarily difficult. In fact, the ways we typically “solve” complexity are themselves extreme:

In businesses or social organisations: Overwhelmed by complexity, organisations often stagnate and are replaced by younger, simpler competitors. This cycle of “creative destruction” keeps the private sector relatively nimble.

But governments are not as easily replaced—they persist. The usual solutions to government complexity are much harsher: war or revolution. Wartime often forces governments to streamline decision-making, while revolutions reset the system entirely.

This last observation leads to the key question: How can we create organisations that can reduce their own complexity? And how can we apply this to government, so as not to require periodic wars or revolutions? Large-scale warfare is a tragedy for those involved, and, with modern weapons, an existential threat to humanity.


What does a complexity-resistant organisation look like?

This is where some really great examples and case studies would be helpful!

There are a few approaches that come to mind—though none (yet) compelling:

1. A “Balanced Complexity Budget”

Similar to “balanced (fiscal) budget” requirements, governments could adopt a “complexity budget,” requiring that for every new page of law or regulation, an equivalent amount is removed. This would ensure that complexity is constrained (or allowed to grow only in proportion to, say, GDP or population size). Of course, this approach only works on forms of complexity we can measure effectively (pages of regulation), and not elsewhere.

2. Zero-Based Budgeting

In businesses, zero-based budgeting forces organisations to start their planning from scratch each year, rather than simply adding a percentage on to last year’s budget. But in practice I’ve seldom seen this approach produce an outcome significantly different from standard budgeting.

Emerging case studies?

Two current examples come to mind, though neither is perfect:

1. Argentina

Javier Milei’s government in Argentina has undertaken massive efforts to reduce the size and complexity of the government, with dramatic effects on, for example, inflation, as well as huge unhappiness (as expected) from the affected constituencies. It’s too early to tell whether these reforms will succeed, and anyway, they became possible only after years of near-total economic collapse, so they hardly represent an easily replicable approach.

2. Government Efficiency Initiatives

Efforts like the U.S. Department of Government Efficiency (under the Trump administration) aim to reduce regulatory complexity. However, I’m not sure there’s the political alignment in the Trump camp to make them succeed, as seen already in disagreement on the direction (more vs less) of immigration regulation and tariffs, as well as the generally rapid turnover amongst those considered Trump’s favourites.

If you can think of any compelling examples of successful complexity reduction, let’s discuss—drop your thoughts in the comments!

What happened at SVB

Here’s my very simplified version of what happened to bring down SVB: and specifically, why no-one seemed to have seen it coming. Informed very much by Matt Levine’s excellent writing on the topic. Thoughts are purely my own, not representing any organisation.

At core, we need to look at a simplified model of what banks like SVB do, and especially, what then happens when interest rates change. Essentially, banks:

  1. Take in deposits, from individuals and businesses. Deposits are generally low-interest, and as interest rates rise, only a little of that is passed on to deposit accounts.
  2. Put all that money somewhere! Broadly, there are two options:
    • A. Loan-like instruments (e.g., home loans, business loans). These are often floating-rate, i.e., their interest rates follow market rates, but they are also very “illiquid” (hard to sell or otherwise turn into cash). If a bank makes a home loan for a specific house, it can’t easily get that money back immediately.
    • B. Bond-like instruments, like corporate debt. These are typically fixed interest rate, but they are liquid (easy to sell).

Now, what happens when interest rates go up? Deposit and bond rates don’t really change much, but loan interest rates rise. This is an “endowment effect” that leads banks, all else being stable, to make more money when interest rates rise: their Net Interest Income (NII) rises as rates rise.
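The endowment effect can be sketched with a toy calculation. The balances and pass-through rates below are illustrative assumptions of mine, not SVB’s actual numbers:

```python
# Toy net-interest-income (NII) model: floating-rate loan income follows
# market rates, while deposit rates pass on only a small share of a rise.
# All balances and rates here are illustrative assumptions.

def nii(loans, loan_rate, deposits, deposit_rate):
    """Net interest income: what the bank earns on loans minus what it pays on deposits."""
    return loans * loan_rate - deposits * deposit_rate

loans = deposits = 100.0  # simplified balance sheet

before = nii(loans, 0.04, deposits, 0.005)

# Market rates rise by 2 percentage points; loans reprice fully,
# but deposits pass through only ~20% of the rise:
after = nii(loans, 0.04 + 0.02, deposits, 0.005 + 0.2 * 0.02)

print(before, after)  # NII rises with rates: 3.5 -> ~5.1
```

All else being stable, the bank’s margin widens as rates rise, which is exactly the endowment effect described above.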

Great! Next question: what happens if, for some reason, a lot of depositors want their money back at once? The bank would eventually run out of cash reserves, and need to sell some bonds (as the loans are hard to sell). But here’s a problem: bonds have a fixed interest rate, and their market value decreases when interest rates rise, because new investors would rather buy new bonds offering a higher rate than your old low-rate bonds. When a bank holds a bond to maturity, that’s not a problem: it gets back the full face value of the bond. But if a bond needs to be sold early, and interest rates have risen, the seller will take a loss. In the worst case, a bank being forced to sell lots of bonds could make a huge loss, which overwhelms its capital reserves and leaves it insolvent.
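To make the mark-to-market mechanism concrete, here is a minimal pricing sketch. The coupon, yield, and maturity figures are purely illustrative assumptions, as are the simplifications of annual coupons and a flat yield curve:

```python
# Toy bond pricing: a fixed-rate bond's market value is the present value
# of its coupons and principal, discounted at the current market yield.
# Annual coupons and a flat yield curve are simplifying assumptions.

def bond_price(face, coupon_rate, market_yield, years):
    """Present value of all coupons plus the principal repayment."""
    coupons = sum(
        face * coupon_rate / (1 + market_yield) ** t
        for t in range(1, years + 1)
    )
    principal = face / (1 + market_yield) ** years
    return coupons + principal

# A 10-year bond bought at par when yields were 1.5%...
at_par = bond_price(100, 0.015, 0.015, 10)

# ...fetches only ~76 if yields rise to 4.5% and it must be sold early,
# a ~24% loss. Held to maturity, it would still repay the full 100.
marked_down = bond_price(100, 0.015, 0.045, 10)
```

The loss only becomes real if the bond is sold before maturity, which is why this risk stays “theoretical” right up until a bank is forced to sell.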

Normally, this is irrelevant, as this only happens if a bank has to sell bonds early, i.e., has a massive outflow of deposits, a bank run. There are many mechanisms to prevent this:

  • deep relationships between the bank and its customers;
  • a wide variety of depositors, many of whom don’t really follow the finer points of financial news and so are fairly “sticky”;
  • deposit insurance;
  • capital buffers, regulatory supervision, risk modelling, etc etc.;
  • and hedges. Let’s talk about these.

Clearly, it would conceptually be useful for banks to be able to deploy cash in instruments that both carry floating interest rates (and so do not lose market value when interest rates rise) and are highly liquid. You could imagine two ways to do that:

  1. Make loans more liquid by, let’s say, packaging groups of similar loans into standardised instruments (call them “CDOs”), splitting them into tranches by risk, getting ratings agencies to rate them, and then creating a liquid market for them. There’s a problem with this, though: it removes the risk from the loan originators, leading to perverse incentives that produce bad-quality loans, and you get the 2008 financial crisis. So, let’s not do this.
  2. Make bonds that don’t lose market value when interest rates rise. This can, broadly, be done by banks through hedging on interest rates. Then, when interest rates rise, the bonds lose market value, but the hedges make money to roughly counteract that effect, and vice versa. This is a great idea, in general!

So why did SVB not have hedges in place? It seems that they were worried about what happens when interest rates fall: if hedges make money when rates rise, they obviously lose money when rates fall. Combined with the negative endowment effect on loans, this can make falling rates pretty bad for bank profitability. So, it seems that SVB dismantled much of its hedging in 2022, to take profits and to avoid losses if/when rates fell again. And this would have been fine, as long as we didn’t get both a rise in interest rates and a lot of depositors wanting their money back. Of course, that’s exactly what then happened, and clearly the bank’s risk scenario testing was insufficient.

So let’s put this together into what led to SVB’s collapse:

  1. An (unrealised, theoretical) mark-to-market loss on bond holdings, due to:
    • lots of bonds relative to loans, at SVB, due to their client base of startups being relatively cash-rich and loan-light
    • insufficient hedging, due to concerns about the impact of hedges on profitability if rates were to fall.
  2. An unprecedented drop in deposits, due to:
    • a depositor base suddenly becoming less cash-rich, due to the sudden slowdown in VC funding to startups
    • a depositor base unusually prone to runs, because most of its money was in deposits that exceeded the deposit insurance maximums, and came from depositors that were NOT diverse, as most startups (and especially their VC shareholders) were in the same WhatsApp groups
    • modern banking apps making it way easier to move cash out of a bank — no more queueing on the steps of the bank
    • some communication accidents and mistakes that flagged the theoretical massive losses on the bank’s bond holdings at market price.
  3. An inability to find extra liquidity to cover the gap:
    • SVB tried to raise further equity, but this failed and just contributed to the communication of the point just above, i.e., accelerated the deposit flight
    • emergency funding from the Fed, backed by bond holdings, would have had to have been done at market prices for bonds, thereby realising the theoretical mark-to-market losses, and leading to insolvency. Catch-22!

So my guess is, we’ll see regulatory changes and/or focus on requiring banks to model the impact of interest rate changes, not only on profitability and cash flow, but also on a bank’s ability to liquidate assets at short notice, without taking prohibitive market price losses.

Why does the start up industry beat corporates?


We seem to have, today, an unparalleled explosion of young, new companies, pioneering new products or ways of doing business, and thereby disrupting seemingly invincible pillars of our economy through explosive growth: the companies we commonly call startups. How is this possible?

Photo by Ian Schneider on Unsplash

Startups face a seemingly impossible challenge: they seek to build successful businesses from nothing. To do so, they need products that are so much better than alternatives that customers choose to use the new products, despite the lack of any brand recognition. These products need to be built on a shoe-string budget (at least initially), and quickly, by a team of founders that are working with limited resources, limited structures and few established commercial relationships. How can this ever work? Why don’t bigger companies, with access to all the same new technologies, lots of resources and skilled staff, a brand, and sales and marketing teams, win every time?

The answer often comes down to two things: startups have a completely crazy idea that actually works, and/or they are unreasonably good at something.

Continue reading “Why does the start up industry beat corporates?”