Where to for the AI datacentre boom? Transformational utilities, and their bubbles.

Prediction: The AI datacentre industry will be another example of a recurring pattern I’ll call a “transformational utility”: an industry which is capital intensive, massively disruptive, and soon indispensable to the rest of the economy, but also undifferentiated. And therefore, for early equity holders, often disappointing.

The old playbook, again

AI is obviously transformative, but it’s not the first technology to rewire society. Let’s look at previous innovations such as canals, railways, steel, electricity, fibre internet, and mobile phone networks to see what we can learn about capital-intensive, society-changing inventions.

When societies reorganise around new infrastructure, the story tends to rhyme:

  1. Breakthrough + capex. A new invention arrives with vast promise, but equally vast capital requirements.
  2. Early scarcity. Capacity lags because capital projects take time to execute.
  3. “Bubble” phase. Those in the lead enjoy massive valuations, as they promise to dominate the revolution.
  4. Commoditisation. The buildout catches up with demand; the lack of differentiation in the underlying product exposes an inability to sustain high prices.
  5. Real growth continues. The sector keeps getting bigger and more valuable to society.
  6. Multiple compression. But the early players cannot maintain pricing power, valuations tend back down, and many early investors lose despite the sector’s real-world success.

Canals, railways, steel, electricity, fibre backbones, and mobile networks have all walked this path: The railway barons, US Steel, Edison Electric (later GE), Cisco, and most mobile networks enjoyed boom valuations at some point. Then returns normalised, even as their industries grew to multiples of their prior size.

Steel is a particularly interesting example. It sat at the centre of USSR and later Chinese industrial strategy. But as raw steel capacity became abundant, the USA’s path showed that long-term economic leadership came from differentiated offerings downstream. (There is a geopolitical angle to steel which is re-emerging now: more on that below.)

Of course, there are high-valuation industries (and bubbles) that are NOT transformational utility bubbles, for example:

  • Tulips. Some bubbles centre on things with trivial enduring utility. AI compute isn’t that.
  • iPhones (or Rolexes). Some products sustain premium margins through differentiation and brand. Raw compute is not that either. For example, the mechanical-watch industry (Rolex, etc.) is worth more than ever before, because it has reframed its products as strongly-branded status symbols for men, not merely timekeepers.

Why AI data centres behave like utilities

The test for utility economics is interchangeability. If buyers view your product as equivalent across providers, price drifts towards (operating cost + cost of capital). Higher prices just attract new entrants, who gain share until prices converge to that entry threshold.
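
To make that mechanism concrete, here’s a toy sketch in Python (all numbers invented purely for illustration, not a model of any real market): fat margins attract new capacity until the price settles at the entry threshold of operating cost plus cost of capital.

```python
# Toy illustration of commoditisation: undifferentiated capacity attracts
# entrants until price falls to the entry threshold. All numbers are made up.

OPERATING_COST = 0.60     # cost to serve one unit of compute
COST_OF_CAPITAL = 0.15    # return per unit needed to justify a new build
ENTRY_THRESHOLD = OPERATING_COST + COST_OF_CAPITAL

price = 2.00              # early-scarcity price, well above the threshold
capacity = 100.0          # installed capacity (arbitrary units)
demand = 400.0            # stylised, fixed demand

for year in range(1, 21):
    if price <= ENTRY_THRESHOLD:
        break
    margin = price - ENTRY_THRESHOLD
    capacity *= 1 + min(0.5, margin)   # fat margins -> fast build-out by entrants
    # stylised inverse-demand relation: more capacity pushes price down,
    # but never below the threshold at which new builds stop making sense
    price = max(ENTRY_THRESHOLD, 2.00 * demand / (demand + capacity))
    print(f"year {year}: capacity={capacity:7.0f}, price={price:.2f}")
```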

Compute is globally tradable over networks. A data centre is just a building with electricity, cooling and connectivity, in which lots of matrix multiplications can be done. It may, in fact, be the most tradable of all the transformational utilities, as there are no technical reasons why we couldn’t put all our compute in one place on the planet.

The major structural reason to deviate from this gravity is geopolitics (data sovereignty, national security, sanctions, energy policy). Governments can and will localise capacity; they can also tax or subsidise it. But that’s political risk, not durable product differentiation, and it drives subsidies, not big financial returns.

So where’s the differentiation (and the excess return)?

Think of where differentiation exists in the AI stack:

  • Chips (e.g., NVIDIA). Moderate. Real technical edge and speed-of-innovation moats, but they’re cyclical and not guaranteed (ask Intel).
  • Data centres / cloud compute. Low (outside geopolitics). Scale and operations matter, but sameness dominates pricing power in the long run.
  • Models (LLMs, core algorithms). Moderate now, lower over time. Capabilities diffuse fast; weights leak; papers ship; open models improve. Most use cases allow users to freely swap between several LLMs.
  • Applications. High variance, real moats available. This is where long-term margin lives — exactly as electricity’s wealth accrued to the things using it, not the grid itself.

What this implies

I’m not predicting a dramatic bubble “burst” tomorrow. Scarcity can continue longer than sceptics expect, and AI is likely to be capacity constrained for a long time. But multiples for compute-heavy businesses should compress as capacity catches up. The companies will be fine, some shareholders won’t. (Cisco still exists; 1999 buyers are still unhappy.)

Infrastructure bets need a clear theory of longevity. OpenAI (and others) tying valuation to data-centre buildout only makes sense if controlling compute during the next few years catapults them into a leading position in a new post-AI world, in a way that didn’t happen to any of the previous darlings of transformational infrastructure. This might happen (AI is unusual enough to keep minds open) but it’s a high-conviction, high-timing bet.

Finally, to be really clear, I’m not predicting that the AI revolution will underwhelm. Far from it! Just that the actual buildout of data centres is something you might want to leave to someone else.

The AI singularity: Situational Awareness vs the Societal Speed Limit

It’s a good time of year to look back at the bigger questions facing us. So … AI it is! Here are some of my current thoughts, mostly so that I can look back in five years’ time and laugh at how terribly naive I was / we were.

The paper Situational Awareness – The Decade Ahead paints an extraordinary picture of the next decade, one where AI transforms almost every aspect of society at a breakneck pace. It’s breathtaking in scope and implication, and well worth a read; it also provokes the question of whether change can really happen as rapidly as it claims. Let’s ask that question, and propose a “Societal Speed Limit” which I think will be the ultimate decider of the pace of AI-driven change.


Three points from the paper stood out most strongly for me:

1. Credible projections of an incredibly fast pace of improvement

The paper forecasts AI capabilities to advance at an astonishing rate, driven by improvements in hardware availability (compute), algorithms, and deployment methodologies (e.g., agentic tools). Together, this could give up to 10 orders of magnitude of improvement (i.e., AIs ten billion times more capable) over less than a decade—a staggering figure. Given that AI already exceeds human-level capabilities in many narrowly-defined areas, this would inevitably change the world.

2. The Adoption Curve: Slow, Then Sudden

AI tools today often stall at the proof-of-concept stage, requiring integration and adaptation by organisations. But the emergence of human-level agents that can directly use existing tools without integration effort could act as a tipping point: you “hire” an AI in the same way as a human hire, and it uses the same tools a human would. This would immediately make most PoCs irrelevant, and open far more human roles to AI improvement / replacement.

3. The Geopolitical Frame

The paper spends a lot of time on U.S.-China competition, arguing that AI leadership could define not just economic success but also military dominance. While this might be geopolitically accurate, it feels to me a bit sad that the focus moves so quickly to the specific great power competition of this point in time, given AI’s broader historical importance. This is possibly a pivotal point in the history of our species, or even of life on earth! It’s a bit like imagining that the invention of a usable numerical system was primarily about ancient Sumerian-Babylonian competition.


Where I agree

  • No ceiling in sight: Some suggest that AI is plateauing. This feels like an incredibly bold claim, given that we’re barely more than two years into the post-ChatGPT world, and already far beyond the capabilities of the original ChatGPT. Every week still brings breakthroughs.
  • Cost as a non-constraint: Yes, AI is (arguably) expensive. But, for example, the costs of specific OpenAI capabilities have come down by ~99% over the last two years. This is Moore’s law on steroids (a rough back-of-the-envelope comparison follows this list). Barriers to adoption are unlikely to be economic, short-term corrections notwithstanding.
  • Surprises ahead: We cannot imagine all the impacts AI will have, and we will be surprised. Looking back, the experts expected it to take decades to make the progress we’ve seen in the last five years, and few expected current AI to turn out to be so good at creative work (writing, art) in particular.
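
For a sense of scale on the cost point, a quick back-of-the-envelope: the ~99%-over-two-years figure is the one quoted above, and the Moore’s-law benchmark of roughly 2x every two years is the usual rule of thumb.

```python
# ~99% price drop over ~2 years vs. a Moore's-law-style halving every 2 years.
drop, years = 0.99, 2

ai_factor_per_year = (1 - drop) ** (1 / years)   # fraction of last year's cost remaining
moore_factor_per_year = 0.5 ** (1 / 2)           # cost halves every two years

print(f"AI capability cost: x{ai_factor_per_year:.2f} per year "
      f"(~{1 / ai_factor_per_year:.0f}x cheaper each year)")
print(f"Moore's-law pace:   x{moore_factor_per_year:.2f} per year "
      f"(~{1 / moore_factor_per_year:.1f}x cheaper each year)")
```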

Where I disagree: the pace of change on the ground

Technical Roadblocks? Yes (but it doesn’t matter)

Technically, I think we’ll hit some roadblocks. My current opinion is that Situational Awareness underestimates the architectural challenges we still need to overcome.

Current LLMs are built on “attention” as the simplifying breakthrough. But this architecture inherently has limited internal mental state, which is likely crucial for persistent goals and a nuanced understanding of the environment, such as noticing when the model is stuck in a non-productive loop. Addressing this may require significant architectural changes. In particular, having a persistent mental state makes training difficult, as the model’s output is no longer deterministically produced just by its input, but also by its accumulated internal state. It might be that the “world models” approach provides a manageable way for AIs to understand the context of their inputs and outputs. I worry, though, that we need to invent a more self-organising approach to training, probably including recursive connections, i.e., output looping back to input within the model’s neural network. However, this removes much of the massive training parallelism we won with the attention mechanism.

The paragraph above may be hopelessly naive (I’m not an expert), and anyway doesn’t really matter: the current models, with conservative extrapolation, are quite enough to completely change society. So, will they?

Societal uptake: Why it will be slower

1. Deploying new technology is never instant

History is full of examples of groundbreaking technologies taking far longer to reshape society than expected. For example, electricity: it’s fantastic, but is still far from universally available across the globe. To achieve its economic advantages, electrification needs an ecosystem: infrastructure, supply chains, capabilities, demand. You can’t use electricity in a factory until you have an economic context with known opportunities and demand, input materials, logistics networks, trained staff, conducive regulations, and so on. This is why rebuilding an economy (e.g., Germany in 1945) is often far easier than creating economic growth from scratch: people remember how the networks worked and can reimplement them, rather than needing to solve all the pieces from scratch.

AI will face similar challenges. It can’t just be “dropped in” to most organisations or systems, even in agent form. If we think of AI today as a vast pool of really smart, low-wage university graduates (with amnesia, though that may be solved in coming years), then the challenge is clear: most organisations cannot productively absorb a big pool of such graduates, as there are bottlenecks elsewhere.

It can be argued that AI plus robotics undermines this point: just use robots to build the ecosystem too. But even this needs time: to build the robots, to build the factories that build the robots, to build the mines that provide the materials to the factories, and so on.

2. AI will replace people bottom-up

The way AI replaces human labour will likely follow a bottom-up trajectory, starting with junior roles and tasks. To be clear, though: not only (or even primarily) low-skill roles, but rather junior roles that can be done with a computer. That’s a lot of roles! But the replacement starts at entry-level positions.

Why? Obviously, leaders rarely automate themselves. But beyond self-preservation, senior roles often involve judgment, relationships, and high-stakes decisions that stakeholders are reluctant to entrust to AI. For example, in a law firm, it’s easy to imagine junior associates being replaced by AI for drafting contracts or doing due diligence, but much harder to envision clients trusting AI with the high-stakes negotiations typically handled by partners. Likewise CEOs: even if AI would probably do a better job … who would be brave enough to make that call?

Additionally, it’s easier to replace, for example, 50% of the seats in a standardised role, than 50% of a job done by a single person (i.e., a leader).

I expect we’ll see junior positions vanish faster than senior ones, hollowing out traditional career progression.

3. The “societal speed limit” on the rate of producing “losers”

Perhaps the most significant constraint on AI adoption will come from society itself. Disruption creates “winners and losers”, and the pace of that disruption matters. If AI displaces workers faster than society can absorb the shock, the resulting inequality could create enormous political and social backlash.

Let me suggest a principle:

  • Society has an “immune response” to fight against change that produces lots of people who feel that their future prospects are deteriorating.
  • The greater the rate (percentage of people per annum) at which people are experiencing change that results in deteriorating prospects, the stronger the response.
  • The response escalates from pressure on governments to regulate, to voting out those governments in favour of others that promise to act more firmly, all the way to destructive protests and ultimately revolution.

That is, society will “fight back” against change producing too large a share of people with deteriorating prospects, by finding leaders or actions that will successfully slow down the rate of change.

The “societal speed limit” isn’t just a concept—it’s a reality we’ve seen time and again. From the Luddites to modern protests against globalization, society resists changes that leave too many people behind. With AI, this principle will likely shape the pace of adoption as much as the technology itself.

The challenge isn’t just economic; it’s also generational. What happens when young people entering the workforce find fewer paths to meaningful employment? Youth unemployment could lead to disengagement, frustration, and instability, creating long-term societal challenges far beyond the immediate economic impact.


So where to?

To summarise:

  • The paper Situational Awareness – The Decade Ahead paints a picture of extraordinarily disruptive and rapid change.
  • It may underestimate some of the technical challenges, but the projections are so extreme that even a far slower technical pathway requires us to ask how, and how fast, society can change.
  • Social and economic change will be slower than the paper expects, for three reasons:
    • Deploying any technology requires networks, and no “silver bullet” from AI can instantly create the ecosystem that rapid change requires.
    • Change is likely to start bottom-up in the economy, affecting the youth first.
    • Society has a “speed limit” for how rapidly change can produce people with deteriorating personal prospects. Exceed the speed limit, and society will force actions to slow the pace of change.

We are in for one hell of a ride in the years to come! Change will come incredibly quickly in some areas. For the rest, I believe it will come faster than most expect, in unexpected ways, but still slower than the Situational Awareness paper projects in its extreme scenarios.

It will affect the youth more quickly, and risk leaving parts of the world with less developed ecosystems even further behind.

The “societal speed limit” may slow the pace of change, but we should not expect this process to be comfortable, as that slowing may come from huge societal unrest. And through it all, we need to avoid a catastrophic AI-safety failure where AIs attack humanity, and avoid a superpower war.

Book idea: How can organisations successfully reduce complexity??

Anyone got any good examples of organisations which have substantially simplified themselves?? Because I think there’s a fascinating popular social science book waiting to be written here (similar to, e.g., Collapse). But the missing piece is good case studies or examples. In the meantime, here’s the setup for what I mean by complexity, and why I think it’s a real problem that we need to work out how to solve, without using wars.

What is complexity and why does it matter?

Bureaucracy, red tape, inertia, gridlock, overhead, “the swamp”: these are all synonyms for what I mean by complexity. It is any process, law, role or similar that may be well intentioned, or even useful, but has the side effect of making it slightly harder to “get things done” in an organisation.

Let’s look at three examples:

1. Governments

Government complexity shows up in endless laws, regulations, and bureaucracies. Each one imposes a burden—on individuals, businesses, and government itself, in compliance costs and in the increasing proportion of society dedicated to complying with laws and regulations, both inside and outside government. Tax codes are a great example, as are means-tested social security systems: for example, in the U.S., where the process of applying for benefits often costs recipients hours of effort for a payout that equates to less than minimum wage.

2. Businesses

In businesses, complexity tends to emerge as a response to failure: something goes wrong, and a new process is introduced to prevent it from happening again. Individually, these processes seem logical—reviews, audits, change management—but over time, they layer on top of each other, slowing innovation within the organisation. This is one reason startups often outcompete large corporations: they aren’t bogged down by decades of accumulated complexity (see also some other thoughts about how startups beat corporates).

3. Social Organisations

Many social organisations—like religious groups or nonprofits—start with a clear, compelling mission. But over time, they acquire buildings, hire staff, and create processes to manage growth. Eventually, sustaining the organisation itself can overshadow the original mission, with internal politics and inertia becoming barriers to action, or even replacing the original mission of the organisation.

How complexity arises and survives

The key to understanding complexity is recognising how it arises—and why it’s so hard to get rid of. At its core, complexity often results from a mismatch between diffuse costs and concentrated benefits.

Diffuse Costs

The costs of complexity—bureaucracy, paperwork, approval processes—are spread thinly across many people. No single person feels the full burden, so there’s little incentive for any individual to push back.

Concentrated Benefits

The benefits of complexity, on the other hand, are often concentrated on specific groups. For example, a new compliance rule might create a department of specialists whose jobs depend on maintaining that rule. Or a subsidy might benefit a particular industry, which then lobbies to keep it in place. Or a specific example of something that went wrong in the past can motivate a “never again” mindset, at almost any cost.

The Result: Unavoidable complexity!

This dynamic creates three big problems:

1. No Natural Constituency for Simplicity: Few people are motivated to fight complexity because its costs are so widely dispersed.

2. Strong Pushback Against Change: Those who benefit from complexity—either directly or by avoiding past problems—resist efforts to simplify.

3. Weak Incentives for Leaders: Simplifying a system often requires expending political or organisational capital, but the benefits are diffuse and are realised only in the long term, while the backlash is immediate and concentrated. Simplification seldom, in practice, wins votes.

How have we historically solved excessive complexity?

Complexity can’t grow forever—eventually, the costs outweigh the benefits, and something has to give. But history shows us that reducing complexity is extraordinarily difficult. In fact, the ways we typically “solve” complexity are themselves extreme:

In businesses or social organisations: Overwhelmed by complexity, organisations often stagnate and are replaced by younger, simpler competitors. This cycle of “creative destruction” keeps the private sector relatively nimble.

But governments are not as easily replaced—they persist. The usual solutions to government complexity are much harsher: war or revolution. Wartime often forces governments to streamline decision-making, while revolutions reset the system entirely.

This last observation leads to the key question: How can we create organisations that can reduce their own complexity? And how can we use this in government, so as not to require periodic wars or revolutions? After all, large-scale warfare is not only a tragedy for those involved but also, with modern weapons, an existential threat to humanity.


What would a complexity-resistant organisation look like?

This is where some really great examples and case studies would be helpful!

There are a few approaches that come to mind—though none (yet) compelling:

1. “Balanced Complexity Budget”

Similar to “balanced (fiscal) budget” requirements, governments could adopt a “complexity budget,” requiring that for every new page of law or regulation, an equivalent amount is removed. This would ensure that complexity is constrained (or allowed to grow only in proportion to, say, GDP or population size). Of course, this approach only works on forms of complexity we can measure effectively (pages of regulation), and not elsewhere.

2. Zero-Based Budgeting

In businesses, zero-based budgeting forces organisations to start their planning from scratch each year, rather than simply adding a percentage on to last year’s budget. But, in practice, I’ve seldom seen this approach produce an outcome significantly different to standard budgeting.

Emerging case studies?

Two current examples come to mind, though neither is perfect:

1. Argentina

Javier Milei’s government in Argentina has undertaken massive efforts to reduce the size and complexity of the government, with dramatic effects on, for example, inflation, as well as huge unhappiness (as expected) from the affected constituencies. It’s too early to tell whether these reforms will succeed, and anyway, they became possible only after years of near-total economic collapse, so they hardly represent an easily-replicable approach.

2. Government Efficiency Initiatives

Efforts like the U.S. Department of Government Efficiency (under the Trump administration) aim to reduce regulatory complexity. However, I’m not sure there’s the political alignment in the Trump camp to make them succeed, as seen already in disagreements about the direction (more vs less) of immigration regulation and tariffs, as well as the generally rapid turnover amongst those considered Trump’s favourites.

If you can think of any compelling examples of successful complexity reduction, let’s discuss—drop your thoughts in the comments!

What happened at SVB

Here’s my very simplified version of what happened to bring down SVB: and specifically, why no-one seemed to have seen it coming. Informed very much by Matt Levine’s excellent writing on the topic. Thoughts are purely my own, not representing any organisation.

At core, we need to look at a simplified model of what banks like SVB do, and especially, what then happens when interest rates change. Essentially, banks:

  1. Take in deposits, from individuals and businesses. Deposits are generally low-interest, and as interest rates rise, only a little of that is passed on to deposit accounts.
  2. Put all that money somewhere! Broadly, there are two options:
    • A. Loan-like instruments (e.g., home loans, business loans). These are often floating-rate, i.e., their interest rates follow market rates, but they are also very “illiquid” (hard to sell or otherwise turn into cash). If a bank makes a home loan for a specific house, it can’t easily get that money back immediately.
    • B. Bond-like instruments, like corporate debt. These are typically fixed interest rate, but they are liquid (easy to sell).

Now, what happens when interest rates go up? Deposit and bond rates don’t really change much, but loan interest rates rise. This is an “endowment effect” that leads banks, all else being stable, to make more money when interest rates rise: their “Net Interest Income” (NII) rises as rates rise.

Great! Next question: what happens if, for some reason, a lot of depositors want their money back at once? The bank would eventually run out of cash reserves, and need to sell some bonds (as the loans are hard to sell). But here’s a problem: bonds have a fixed interest rate, but their market value decreases when interest rates rise, because new investors would rather buy new bonds offering a higher rate than your old low-rate bonds. When a bank holds a bond to maturity, that’s not a problem — it gets back the full face value of the bond. But if a bond needs to be sold early, and interest rates have risen, the seller will take a loss. In the worst case, a bank being forced to sell lots of bonds could take a huge loss, which overwhelms its capital reserves and leaves it insolvent.
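
A minimal sketch of that mark-to-market effect, with stylised numbers (a single annual-coupon bond; nothing here reflects SVB’s actual holdings):

```python
def bond_price(face, coupon_rate, market_rate, years):
    """Present value of an annual-coupon bond, discounted at the current market rate."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par when rates were 1.5%.
face, coupon_rate, years = 1_000_000, 0.015, 10

print(bond_price(face, coupon_rate, 0.015, years))  # ~1,000,000: worth face value at purchase
print(bond_price(face, coupon_rate, 0.045, years))  # ~763,000: rates at 4.5% => ~24% mark-to-market loss

# Held to maturity, the full face value still comes back; the loss is only
# realised if deposit outflows force an early sale.
```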

Normally, this is irrelevant, as this only happens if a bank has to sell bonds early, i.e., has a massive outflow of deposits, a bank run. There are many mechanisms to prevent this:

  • deep relationships between the bank and its customers;
  • a wide variety of depositors, many of whom don’t really follow the finer points of financial news and so are fairly “sticky”;
  • deposit insurance;
  • capital buffers, regulatory supervision, risk modelling, etc etc.;
  • and hedges. Let’s talk about these.

Clearly, it would conceptually be useful for banks to be able to deploy cash in instruments that have floating interest rates (and so do not lose market value when interest rates rise), and are also highly liquid. You could imagine two ways to do that:

  1. Make loans more liquid, by, let’s say, packaging groups of similar loans into standardised instruments (call them “CDOs”), splitting them into tranches by risk, getting ratings agencies to rate them, and then creating a liquid market for them. There’s a problem with this, though: it removes the risk from the loan originators, leading to perverse incentives that produce bad-quality loans, and you get the 2008 financial crisis. So, let’s not do this.
  2. Make bonds that don’t lose market value when interest rates rise. This can, broadly, be done by banks through hedging on interest rates. Then, when interest rates rise, the bonds lose market value, but the hedges make money to roughly counteract that effect, and vice versa. This is a great idea, in general!

So why did SVB not have hedges in place? It seems that they were worried about what happens when interest rates fall: if hedges make money when rates rise, they obviously lose money when rates fall. Combined with the negative endowment effect on loans, this can make falling rates pretty bad for bank profitability. So, it seems that SVB dismantled much of its hedging in 2022, to take profits and to avoid losses if/when rates fell again. And this would have been fine, as long as we didn’t get both a rise in interest rates and a lot of depositors wanting their money back. Of course, that’s exactly what then happened, and clearly the bank’s risk scenario testing was insufficient.
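
Continuing the stylised numbers from the sketch above, here is roughly what a hedge does to the mark-to-market picture in both directions (the hedge is idealised as a perfect mirror of the bond move; real pay-fixed swaps are only an approximate offset):

```python
def bond_price(face, coupon_rate, market_rate, years):
    coupon = face * coupon_rate
    return sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1)) \
        + face / (1 + market_rate) ** years

face, coupon_rate, years = 1_000_000, 0.015, 10
base_value = bond_price(face, coupon_rate, 0.015, years)

for scenario, rate in [("rates rise to 4.5%", 0.045), ("rates fall to 0.5%", 0.005)]:
    bond_pnl = bond_price(face, coupon_rate, rate, years) - base_value
    hedge_pnl = -bond_pnl   # idealised: hedge exactly mirrors the bond's move
    print(f"{scenario}: bond {bond_pnl:+,.0f}, hedge {hedge_pnl:+,.0f}, "
          f"net {bond_pnl + hedge_pnl:+,.0f}")

# Hedged, the bank is roughly indifferent to rate moves. Unhedged, it keeps the
# gain if rates fall but eats the full mark-to-market loss if rates rise --
# the side of the bet SVB ended up on.
```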

So let’s put this together into what led to SVB’s collapse:

  1. An (unrealised, theoretical) mark-to-market loss on bond holdings, due to:
    • lots of bonds relative to loans, at SVB, due to their client base of startups being relatively cash-rich and loan-light
    • insufficient hedging, due to concerns about the impact of hedges on profitability if rates were to fall.
  2. An unprecedented drop in deposits, due to:
    • a depositor base suddenly becoming less cash-rich, due to the sudden slowdown in VC funding to startups
    • a depositor base unusually prone to runs, because most of it was in deposits that exceeded the deposit insurance maximums, and came from depositors that were NOT diverse, as most startups (and especially their VC shareholders) were on the same WhatsApp groups
    • modern banking apps making it way easier to move cash out of a bank — no more queueing on the steps of the bank
    • some communication accidents and mistakes that flagged the theoretical massive losses on the bank’s bond holdings at market price.
  3. An inability to find extra liquidity to cover the gap:
    • SVB tried to raise further equity, but this failed and just contributed to the communication of the point just above, i.e., accelerated the deposit flight
    • emergency funding from the Fed, backed by bond holdings, would have had to have been done at market prices for bonds, thereby realising the theoretical mark-to-market losses, and leading to insolvency. Catch-22!

So my guess is, we’ll see regulatory changes and/or focus on requiring banks to model the impact of interest rate changes, not only on profitability and cash flow, but also on a bank’s ability to liquidate assets at short notice, without taking prohibitive market price losses.

More Predictions

It’s time for some more predictions! Last time I did that (My predictions for the next 10, 20, 30 years) I was, if anything, too conservative. Although, the section on “Black Swans” at the bottom was particularly accurate… (and has NOT been edited since it was written).


So here goes! Where do I think common opinion is wrong, especially in South Africa?

Electric vehicles and renewables

  1. By 2030, 40%+ of new cars sold in South Africa will be pure electric, with East Africa (e.g., Kenya) a bit behind, and West Africa a bit further behind. This seems inevitable when looking at the promises from major Western car companies, and the pace of innovation and falling prices from Chinese car companies.
  2. Due to 1., by 2030 the demand for petrol in South Africa will be falling 3%+ per year; and electricity demand will be growing 1% per year due to electric cars (though it may be falling for other reasons).
  3. Despite decarbonisation of electricity, and the growth of electric cars, the price of electricity in major global markets will NOT rise significantly from today’s levels, and may even fall, due to rapid rollout of solar and other renewables. South Africa is a special case depending on Eskom’s finances.
  4. By 2030, there will be large businesses built on taking advantage of near-free electricity during sunny hours (e.g., bulk hydrogen production), in several global markets.
  5. There will never again be a major (>300MW) coal power station built in South Africa.
  6. Kusile coal power station will stop operating (or at least have been converted off coal) well before it is 30 years old (i.e., before 2050), despite a design lifetime of 50+ years (to around 2070). Which means an even bigger disaster for its return on capital.

Consumer trends over next 10-15 years

  1. The distinction between FMCG company / “brand” (designs and coordinates the manufacture of products, high margins, high marketing spend %, little direct consumer interaction) and retailer (sells products from brands, low margin, high volume, low marketing spend %) will continue to blur in both directions, into an effective spectrum; plus there will be new logistics business models beyond traditional retailers, that aggregate deliveries from multiple other players (i.e., the Instacart model evolved further).
  2. Traditional monolithic brands will fragment in favour of increasing numbers of niche brands with more authenticity and story. New “meta-brands” will appear, in the form of structured ranges of endorsements by influencers.
  3.  By 2030, 20%+ of “meat-like” products sold in upper-end grocery stores will be plant-based (i.e., non-animal).
  4.  By 2035, we will routinely take individualised medical probiotics in order to tune our gut biota, as treatment for a wide variety of complaints.

Finance

  1. By 2035, it will be functionally impossible for “legitimate” companies and individuals to use tax havens and financial engineering to pay near-zero taxes on profits or income.
  2. There are fortunes that will still be made in simplifying the payment of paper (or PDF) invoices, using machine learning text recognition to automatically load payment requests via bank apps/APIs. This will happen far faster than we can persuade people to stop using paper-based invoices for billing.

Why does the start-up industry beat corporates?


We seem to have, today, an unparalleled explosion in young, new companies, pioneering new products or ways of doing business, and thereby disrupting seemingly invincible pillars of our economy through explosive growth — commonly called startups. How is this possible?


Startups face a seemingly impossible challenge: they seek to build successful businesses from nothing. To do so, they need products that are so much better than alternatives that customers choose to use the new products, despite the lack of any brand recognition. These products need to be built on a shoe-string budget (at least initially), and quickly, by a team of founders that are working with limited resources, limited structures and few established commercial relationships. How can this ever work? Why don’t bigger companies, with access to all the same new technologies, lots of resources and skilled staff, a brand, and sales and marketing teams, win every time?

The answer often comes down to two things: startups have a completely crazy idea that actually works, and/or they are unreasonably good at something.


“Africa at work” report finally published

The report I’ve spent quite a few months working on has been published — Africa at work: Job creation and inclusive growth. We look at the state of employment in Africa, and what needs to be done to create more wage-paying jobs. It’s awesome to see it getting lots of media attention, but also just good to get it out — it was a lot of work!

In other news, Claire and I are back in Johannesburg after a great year in London and a month of travel in Europe. I’m on a leave of absence for another month or so, still enjoying a more relaxed life!

The energy challenge

I just went to the first of a new lecture series at Caltech, NRG 0.1, during which various experts will be discussing aspects of the energy problem (for which read “challenge”) that the world is facing.

This week was Steve Koonin, former Caltech provost and physics professor, and currently chief scientist for BP. I thought it was an excellent talk, covering a lot of the different aspects to the energy question, and some important principles that need to be kept in mind when looking for solutions in the near and medium term. I particularly enjoyed (and, yes, this probably says something about me too) how the talk assembled a large collection of numbers into a few key “back-of-the-envelope” facts, and then analysed the various options in terms of these constraints. While I’m not going to summarise the whole talk (which will hopefully be available here soon), here are some of the things which stood out:

2050 / twice pre-industrial
By BP’s Business as Usual (BAU) analysis, sometime before 2050 CO2 will hit twice pre-industrial atmospheric levels. This is a tipping point in many models, and so serves as a useful “safe” upper limit. Anything we do has to have a big effect well before 2050.

Running out of oil vs. global warming
A few years ago I was more concerned about the former; now I think I’m more concerned about the latter. The global economy is handling the high oil prices very well, so non-conventional oil, like the oil sands in Canada, really starts to look accessible. Oil prices may stay high, and national concerns about oil supply security may discourage oil use, but I think it’s here for a few more decades. My take-home message: global warming will be solved, or not, before oil runs out.

CO2 has to drop hugely
CO2 has a lifetime of many centuries once it’s in the atmosphere. Thus to reach CO2 stability at twice pre-industrial levels by 2050, we actually need to cut emissions by about half from today’s level. (A useful figure: due to CO2 longevity, a drop of 10% in CO2 emissions growth delays by about 7 years the crossing of any given atmospheric CO2 concentration). But by business as usual estimates, economic growth, even including historically extrapolated improvements in efficiency, will have raised emissions by a factor of 4. So we have to improve somehow by a factor of 8. As Koonin points out, efficiency gains are generally overwhelmed by increased consumption.
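
The factor-of-8 arithmetic, spelled out using the figures quoted above:

```python
# Figures quoted above: stabilising at twice pre-industrial CO2 by 2050 needs
# emissions at roughly half of today's level, while business-as-usual growth
# (even with historical efficiency trends) would multiply them by roughly 4.
bau_emissions_2050 = 4.0      # relative to today's emissions
target_emissions_2050 = 0.5   # relative to today's emissions

required_improvement = bau_emissions_2050 / target_emissions_2050
print(f"Required improvement vs business-as-usual: {required_improvement:.0f}x")   # -> 8x
```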

CO2 drops have to start now
As CO2 stays in the atmosphere, delaying change by a few years makes the required drops much larger in future. Furthermore, the main drivers of emissions (power plants, houses, cars, etc.) all have lifetimes of decades — so the power plants being built now will still be emitting in 2050. Basically, if nothing dramatic changes in the next 5 to 10 years, stability by 2050 becomes nearly impossible.

Many “solutions” just don’t scale
There’s huge enthusiasm for corn-based biofuels in the US at the moment. Koonin’s figures were that about 20% of the corn crop is now going to fuels, contributing about 2% of the US’s transport fuel needs. This doesn’t scale to solve the problem. Another example: solar. It’s a lot more expensive, and so will never be accepted commercially. But even if it was, we need to cover (if I recall the figure) a million rooftops with solar panels every year, starting right now, to reach stability by 2050. I’m not sure if that was globally or just the US.

$30/ton CO2
Currently, emitting CO2 is free in most places (Europe is a partial exception). That makes coal the cheapest power source. Most emissions reduction schemes assign a cost, one way or another, to CO2. Koonin had an interesting comparison graph: below about $20/ton CO2, coal remains cheapest. Above about $40/ton, there are no further major changes to the ordering of energy sources. So the magic number that balances economic cost against actually changing behaviour is around $30/ton. This would add only about 15% to the cost of petrol in the US or SA, and a little less in Europe, say. So the biggest changes will be in fixed electrical generation plants (which anyway are the biggest emitters).
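
A rough sanity check on the petrol figure (the ~2.3 kg of CO2 per litre of petrol burned and the illustrative pre-tax pump price are my assumptions, not numbers from the talk):

```python
co2_per_litre_kg = 2.3          # approx. CO2 emitted by burning one litre of petrol (my assumption)
carbon_price_per_ton = 30.0     # the ~$30/ton figure from the talk
petrol_price_per_litre = 0.55   # illustrative pre-tax US-style price in USD/litre (my assumption)

carbon_cost_per_litre = co2_per_litre_kg / 1000 * carbon_price_per_ton
share_of_price = carbon_cost_per_litre / petrol_price_per_litre
print(f"Carbon cost: ${carbon_cost_per_litre:.3f}/litre, ~{share_of_price:.0%} of the pump price")
# -> about $0.07/litre, in the same ballpark as the ~15% quoted; in Europe,
#    where taxes make the pump price much higher, the percentage is smaller.
```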

The plan
Koonin’s take on matters, and I think I agree, is that given the size and cost of the changes needed, as well as their urgency, market forces have to be used to make changes. That is, we can’t pick an “ideal solution” and decree that that is what will be done — the political will isn’t there over the time scale required. Rather, the correct policy incentives need to be put in place right now — like a fixed, predictable cost for CO2 (which, interestingly, argues against a cap-and-trade approach), for the next 50 years. Without such definiteness, it becomes really hard for power companies to spend, say, an extra billion dollars now on a power plant that does CO2 sequestration.

Koonin’s roadmap would seem to be: policy incentives right now, leading to CO2 sequestering power plants still running predominantly off fossil fuels; a growing but still far from dominant contribution from sustainable power sources; and revolutionary improvements in next generation biofuels (using plant material that we do not, in fact, want to eat). He justifies hope in a biofuel revolution by pointing out that biotechnology is a very young and rapidly developing field — unlike, say, fusion. He also thinks there’s a chance for a solar revolution, but not with current technology.

As I overheard a participant say on the way out, though, “He could have given a much more pessimistic talk with the exact same slides”. We do have to make immediate, dramatic changes to an area of human endeavour that has vast pre-existing infrastructure, very long time-lines and huge costs. This for a problem that is hard to demonstrate easily now, and exists over a time scale far longer than political cycles. I think there’s a fair chance that, come 2050, we’ll have to be involved in some sort of huge active geoengineering (i.e., a modification designed to “cancel out” our CO2 emissions), in order to stabilise the climate.

Why “Peak Oil” isn’t what really worries me

To flog a dead horse, here’s another post on oil depletion. This one is a few thoughts, mostly rebuttals to some points that have arisen about the validity of the argument around “Peak Oil” — that we’re a few years away from the greatest oil production we’ll ever see, and it’s downhill from there.

This post follows from my post on Price Elasticity of Oil, as well as this post on blogwaffe, and a whole collection of excellent, but scary, posts on Ted Brenner’s blog.

One of the more common replies to Peak Oil concerns is that oil production is not merely a function of how much oil there is in the ground, but rather of a raft of other factors — such as the price of oil (determining which deposits are economical to drill), technology, investment in expanding existing fields, and political stability. I have two points here: the problems of keeping up with demand, and what higher prices mean.
Continue reading “Why “Peak Oil” isn’t what really worries me”