What happened at SVB

Here’s my very simplified version of what brought down SVB, and specifically why no one seemed to have seen it coming. Informed very much by Matt Levine’s excellent writing on the topic. Thoughts are purely my own, not representing any organisation.

At core, we need to look at a simplified model of what banks like SVB do, and especially, what then happens when interest rates change. Essentially, banks:

  1. Take in deposits, from individuals and businesses. Deposits are generally low-interest, and as interest rates rise, only a little of that is passed on to deposit accounts.
  2. Put all that money somewhere! Broadly, there are two options:
    • A. Loan-like instruments (e.g., home loans, business loans). These are often floating-rate, i.e., their interest rates follow market rates, but they are also very “illiquid” (hard to sell or otherwise turn into cash). If a bank makes a home loan for a specific house, it can’t easily get that money back immediately.
    • B. Bond-like instruments, like corporate debt. These are typically fixed interest rate, but they are liquid (easy to sell).

Now, what happens when interest rates go up? Deposit and bond rates don’t really change much, but loan interest rates rise. This is an “endowment effect” that, all else being stable, leads banks to make more money when interest rates rise: their “Net Interest Income” (NII) rises as interest rates rise.
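The endowment effect above can be sketched with toy numbers (mine, purely illustrative; real balance sheets and deposit betas vary a lot by bank):

```python
# Toy illustration of the "endowment effect": deposits reprice only
# partially when market rates rise, while floating-rate loans reprice
# fully, so net interest income widens. All numbers are made up.

def net_interest_income(market_rate, deposits=100.0, loans=100.0,
                        deposit_beta=0.2):
    """Interest earned on loans minus interest paid on deposits.

    deposit_beta: the fraction of a market-rate move that gets passed
    on to depositors (well below 1 for most banks).
    """
    interest_earned = loans * market_rate
    interest_paid = deposits * market_rate * deposit_beta
    return interest_earned - interest_paid

low = net_interest_income(0.01)   # NII at 1% rates: 1.0 - 0.2 = 0.8
high = net_interest_income(0.05)  # NII at 5% rates: 5.0 - 1.0 = 4.0
# In this toy model, NII rises fivefold as rates go from 1% to 5%.
```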

Great! Next question: what happens if, for some reason, a lot of depositors want their money back at once? The bank would eventually run out of cash reserves and need to sell some bonds (as the loans are hard to sell). But here’s a problem: bonds have a fixed interest rate, and their market value decreases when interest rates rise, because new investors would rather buy new bonds offering a higher rate than your old low-rate bonds. When a bank holds a bond to maturity, that’s not a problem — it gets back the full face value of the bond. But if a bond needs to be sold early after interest rates have risen, the seller will take a loss. In the worst case, a bank forced to sell lots of bonds could make a huge loss, which overwhelms its capital reserves and leaves it insolvent.
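To see the size of the effect, here’s a minimal sketch that prices a fixed-coupon bond by discounting its cash flows (my own toy numbers, nothing to do with SVB’s actual book):

```python
# Price a fixed-coupon bond as the present value of its cash flows,
# and see how its market value drops when prevailing rates rise.

def bond_price(face, coupon_rate, market_rate, years):
    """Present value of annual coupons plus the face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t
                     for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A 10-year bond bought at par, yielding 2%:
at_par = bond_price(1000, 0.02, 0.02, 10)       # = 1000 (par)

# Rates rise to 5%; the same bond now sells for much less:
after_rise = bond_price(1000, 0.02, 0.05, 10)   # ≈ 768

# Held to maturity you still get 1000 back; forced to sell early,
# you realise a loss of roughly 23% of face value.
```

The bigger the rate move, and the longer the bonds’ maturity, the bigger the mark-to-market hole.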

Normally, this is irrelevant, as it only matters if a bank has to sell bonds early, i.e., suffers a massive outflow of deposits: a bank run. There are many mechanisms to prevent this:

  • deep relationships between the bank and its customers;
  • a wide variety of depositors, many of whom don’t really follow the finer points of financial news and so are fairly “sticky”;
  • deposit insurance;
  • capital buffers, regulatory supervision, risk modelling, etc etc.;
  • and hedges. Let’s talk about these.

Clearly, it would be conceptually useful for banks to be able to deploy cash in instruments that both carry floating interest rates (and so do not lose market value when interest rates rise) and are highly liquid. You could imagine two ways to do that:

  1. Make loans more liquid by, let’s say, packaging groups of similar loans into standardised instruments (call them “CDOs”), splitting them into tranches by risk, getting ratings agencies to rate them, and then creating a liquid market for them. There’s a problem with this, though: it removes the risk from the loan originators, leading to perverse incentives that produce bad-quality loans, and you get the 2008 financial crisis. So, let’s not do this.
  2. Make bonds that don’t lose market value when interest rates rise. This can, broadly, be done by banks through hedging on interest rates. Then, when interest rates rise, the bonds lose market value but the hedges make money to roughly counteract that effect, and vice versa. This is a great idea, in general!
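The offsetting effect can be sketched with a first-order (duration) approximation. This is a toy model, not how a real hedge book is run: I’m representing the hedge simply as a position whose value moves opposite to the bond’s rate exposure, in the spirit of a pay-fixed interest-rate swap.

```python
# Toy sketch of how a rate hedge offsets bond mark-to-market losses.
# Duration approximation: price change ≈ -duration × rate move.

def bond_value_change(duration, rate_move, holding=100.0):
    """First-order approximation of a bond position's value change."""
    return -duration * rate_move * holding

def hedge_value_change(duration_hedged, rate_move, holding=100.0):
    """A pay-fixed-swap-like hedge gains when rates rise, in
    proportion to the duration it hedges."""
    return duration_hedged * rate_move * holding

rate_move = 0.03  # rates rise by 3 percentage points
bond_pnl = bond_value_change(duration=6, rate_move=rate_move)        # ≈ -18
hedge_pnl = hedge_value_change(duration_hedged=6, rate_move=rate_move)  # ≈ +18
net = bond_pnl + hedge_pnl  # ≈ 0: the mark-to-market loss is offset

# The flip side: if rates instead *fall* by 3 points, the hedge loses
# about 18 — which is the profitability concern that apparently led
# SVB to unwind its hedges.
```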

So why did SVB not have hedges in place? It seems that they were worried about what happens when interest rates fall: if hedges make money when rates rise, they obviously lose money when rates fall. Combined with the negative endowment effect on loans, this can make falling rates pretty bad for bank profitability. So, it seems that SVB dismantled much of their hedging in 2022, to take profits and to avoid losses if/when rates fell again. And this would have been fine, as long as we didn’t get both a rise in interest rates and a lot of depositors wanting their money back. Of course, that’s then what happened, and clearly the bank’s risk scenario testing was insufficient.

So let’s put this together into what led to SVB’s collapse:

  1. An (unrealised, theoretical) mark-to-market loss on bond holdings, due to:
    • lots of bonds relative to loans, at SVB, due to their client base of startups being relatively cash-rich and loan-light
    • insufficient hedging, due to concerns about the impact of hedges on profitability if rates were to fall.
  2. An unprecedented drop in deposits, due to:
    • a depositor base suddenly becoming less cash-rich, due to the sudden slowdown in VC funding to startups
    • a depositor base unusually prone to runs, because most of it was in deposits that exceeded the deposit insurance maximums, and came from depositors that were NOT diverse, as most startups (and especially their VC shareholders) were on the same WhatsApp groups
    • modern banking apps making it way easier to move cash out of a bank — no more queueing on the steps of the bank
    • some communication accidents and mistakes that flagged the theoretical massive losses on the bank’s bond holdings at market price.
  3. An inability to find extra liquidity to cover the gap:
    • SVB tried to raise further equity, but this failed and merely publicised the theoretical losses from the point just above, i.e., it accelerated the deposit flight
    • emergency funding from the Fed, backed by bond holdings, would have had to be done at market prices for bonds, thereby realising the theoretical mark-to-market losses and leading to insolvency. Catch-22!

So my guess is, we’ll see regulatory changes and/or focus on requiring banks to model the impact of interest rate changes, not only on profitability and cash flow, but also on a bank’s ability to liquidate assets at short notice, without taking prohibitive market price losses.

More Predictions

It’s time for some more predictions! Last time I did that (My predictions for the next 10, 20, 30 years) I was, if anything, too conservative. That said, the “Black Swans” section at the bottom was particularly accurate… (and has NOT been edited since it was written).


So here goes! Where do I think common opinion is wrong, especially in South Africa?

Electric vehicles and renewables

  1. By 2030, 40%+ of new cars sold in South Africa will be pure electric, with East Africa (eg Kenya) a bit behind, and West Africa a bit further behind. This seems inevitable when looking at the promises from major Western car companies, and the pace of innovation and falling prices from Chinese car companies.
  2. Due to 1., by 2030 the demand for petrol in South Africa will be falling 3%+ per year; and electricity demand will be growing 1% per year due to electric cars (though it may be falling for other reasons).
  3. Despite decarbonisation of electricity, and the growth of electric cars, the price of electricity in major global markets will NOT rise significantly from today’s levels, and may even fall, due to rapid rollout of solar and other renewables. South Africa is a special case depending on Eskom’s finances.
  4. By 2030, there will be large businesses built on taking advantage of near-free electricity during sunny hours (e.g., bulk hydrogen production), in several global markets.
  5. There will never again be a major (>300MW) coal power station built in South Africa.
  6. Kusile coal power station will stop operating (or at least have been converted off coal) well before 30 years (2050), despite a design lifetime of 50+ years (to around 2070). This means an even bigger disaster for its return on capital.

Consumer trends over next 10-15 years

  1. The distinction between FMCG company / “brand” (designs and coordinates the manufacture of products, high margins, high marketing spend %, little direct consumer interaction) and retailer (sells products from brands, low margin, high volume, low marketing spend %) will continue to blur in both directions, into an effective spectrum; plus there will be new logistics business models beyond traditional retailers, that aggregate deliveries from multiple other players (i.e., the Instacart model evolved further).
  2. Traditional monolithic brands will fragment in favour of increasing numbers of niche brands with more authenticity and story. New “meta-brands” will appear, in the form of structured ranges of endorsements by influencers.
  3. By 2030, 20%+ of “meat-like” products sold in upper-end grocery stores will be plant-based (i.e., non-animal).
  4. By 2035, we will routinely take individualised medical probiotics in order to tune our gut biota, as treatment for a wide variety of complaints.


  1. By 2035, it will be functionally impossible for “legitimate” companies and individuals to use tax havens and financial engineering to pay near-zero taxes on profits or income.
  2. There are fortunes that will still be made in simplifying the payment of paper (or PDF) invoices, using machine learning text recognition to automatically load payment requests via bank apps/APIs. This will happen far faster than we can persuade people to stop using paper-based invoices for billing.

Why does the start up industry beat corporates?

We seem to have, today, an unparalleled explosion in young, new companies, pioneering new products or ways of doing business, and thereby disrupting seemingly invincible pillars of our economy through explosive growth — commonly called startups. How is this possible?


Startups face a seemingly impossible challenge: they seek to build successful businesses from nothing. To do so, they need products that are so much better than alternatives that customers choose to use the new products, despite the lack of any brand recognition. These products need to be built on a shoe-string budget (at least initially), and quickly, by a team of founders that are working with limited resources, limited structures and few established commercial relationships. How can this ever work? Why don’t bigger companies, with access to all the same new technologies, lots of resources and skilled staff, a brand, and sales and marketing teams, win every time?

The answer often comes down to two things: startups have a completely crazy idea that actually works, and/or they are unreasonably good at something.

Continue reading “Why does the start up industry beat corporates?”

“Africa at work” report finally published

The report I’ve spent quite a few months working on has been published — Africa at work: Job creation and inclusive growth. We look at the state of employment in Africa, and what needs to be done to create more wage-paying jobs. It’s awesome to see it getting lots of media attention, but also just good to get it out — it was a lot of work!

In other news, Claire and I are back in Johannesburg after a great year in London and a month of travel in Europe. I’m on a leave of absence for another month or so, still enjoying a more relaxed life!

The energy challenge

I just went to the first of a new lecture series at Caltech, NRG 0.1, in which a series of experts will discuss different aspects of the energy problem (for which read “challenge”) that the world is facing.

This week was Steve Koonin, former Caltech provost and physics professor, and currently chief scientist for BP. I thought it was an excellent talk, covering a lot of the different aspects of the energy question, and some important principles that need to be kept in mind when looking for solutions in the near and medium term. I particularly enjoyed (and, yes, this probably says something about me too) how the talk assembled a large collection of numbers into a few key “back-of-the-envelope” facts, and then analysed the various options in terms of these constraints. While I’m not going to summarise the whole talk (which will hopefully be available here soon), here are some of the things which stood out:

2050 / twice pre-industrial
By BP’s Business as Usual (BAU) analysis, sometime before 2050 CO2 will hit twice pre-industrial atmospheric levels. This is a tipping point in many models, and so serves as a useful “safe” upper limit. Anything we do has to have a big effect well before 2050.

Running out of oil vs. global warming
A few years ago I was more concerned about the former; now I think I’m more concerned about the latter. The global economy is handling the high oil prices very well, so non-conventional oil, like oil sands in Canada, really starts to look accessible. Oil prices may stay high, and national concerns about oil supply security may discourage oil use, but I think it’s here for a few more decades. My take-home message: global warming will be solved, or not, before oil runs out.

CO2 has to drop hugely
CO2 has a lifetime of many centuries once it’s in the atmosphere. Thus to reach CO2 stability at twice pre-industrial levels by 2050, we actually need to cut emissions by about half from today’s level. (A useful figure: due to CO2 longevity, a drop of 10% in CO2 emissions growth delays by about 7 years the crossing of any given atmospheric CO2 concentration). But by business as usual estimates, economic growth, even including historically extrapolated improvements in efficiency, will have raised emissions by a factor of 4. So we have to improve somehow by a factor of 8. As Koonin points out, efficiency gains are generally overwhelmed by increased consumption.
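The arithmetic behind that “factor of 8” is worth spelling out (my restatement of the figures above, not Koonin’s slides):

```python
# Halve emissions from today's level while business-as-usual growth
# would otherwise multiply them by 4: the required improvement in
# emissions per unit of economic activity is the ratio of the two.

bau_growth_factor = 4    # BAU emissions by 2050, as a multiple of today
target_vs_today = 0.5    # target: roughly half of today's emissions

required_improvement = bau_growth_factor / target_vs_today
# = 8: the economy must emit ~8x less CO2 per unit of output than
# it would under business as usual.
```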

CO2 drops have to start now
As CO2 stays in the atmosphere, delaying change by a few years makes the required drops much larger in future. Furthermore, the main drivers of emissions (power plants, houses, cars, etc.) all have lifetimes of decades — so the power plants being built now will still be emitting in 2050. Basically, if nothing dramatic changes in the next 5 to 10 years, stability by 2050 becomes nearly impossible.

Many “solutions” just don’t scale
There’s huge enthusiasm for corn-based biofuels in the US at the moment. Koonin’s figures were that about 20% of the corn crop is now going to fuels, contributing about 2% of the US’s transport fuel needs. This doesn’t scale to solve the problem. Another example: solar. It’s currently a lot more expensive than the alternatives, and so won’t be adopted commercially at scale. But even if it were, we would need to cover (if I recall the figure) a million rooftops with solar panels every year, starting right now, to reach stability by 2050. I’m not sure if that was globally or just the US.

$30/ton CO2
Currently, emitting CO2 is free in most places (Europe is a partial exception). That makes coal the cheapest power source. Most emissions reduction schemes assign a cost, one way or another, to CO2. Koonin had an interesting comparison graph: below about $20/ton CO2, coal remains cheapest. Above about $40/ton, there are no further major changes to the ordering of energy sources. So the magic number, balancing economic cost against actually changing behaviour, is around $30/ton. This would add only about 15% to the cost of petrol in the US or SA, and a little less in Europe, say. So the biggest changes will be in fixed electrical generation plants (which are anyway the biggest emitters).
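As a back-of-the-envelope check on the petrol figure (my own rough numbers, not from the talk), $30/ton works out to only a few cents per litre:

```python
# How much does a $30/ton CO2 price add to a litre of petrol?
# Burning a litre of petrol emits very roughly 2.3 kg of CO2.

co2_per_litre_kg = 2.3
cost_per_ton_usd = 30.0

added_cost_per_litre = cost_per_ton_usd * co2_per_litre_kg / 1000.0
# ≈ $0.07/litre: a modest fraction of a typical US pump price, and
# an even smaller fraction of heavily-taxed European prices.
```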

The plan
Koonin’s take on matters, and I think I agree, is that given the size and cost of the changes needed, as well as their urgency, market forces have to be used to make changes. That is, we can’t pick an “ideal solution” and decree that that is what will be done — the political will isn’t there over the time scale required. Rather, the correct policy incentives need to be put in place right now — like a fixed, predictable cost for CO2 (which, interestingly, argues against a cap-and-trade approach), for the next 50 years. Without such definiteness, it becomes really hard for power companies to spend, say, an extra billion dollars now on a power plant that does CO2 sequestration.

Koonin’s roadmap would seem to be: policy incentives right now, leading to CO2 sequestering power plants still running predominantly off fossil fuels; a growing but still far from dominant contribution from sustainable power sources; and revolutionary improvements in next generation biofuels (using plant material that we do not, in fact, want to eat). He justifies hope in a biofuel revolution by pointing out that biotechnology is a very young and rapidly developing field — unlike, say, fusion. He also thinks there’s a chance for a solar revolution, but not with current technology.

As I overheard a participant say on the way out, though, “He could have given a much more pessimistic talk with the exact same slides”. We do have to make immediate, dramatic changes to an area of human endeavour that has vast pre-existing infrastructure, very long time-lines and huge costs. And all this for a problem that is hard to demonstrate now, and that exists over a time scale far longer than political cycles. I think there’s a fair chance that, come 2050, we’ll have to be involved in some sort of huge active geoengineering (i.e., a modification designed to “cancel out” our CO2 emissions) in order to stabilise the climate.

Why “Peak Oil” isn’t what really worries me

To flog a dead horse, here’s another post on oil depletion. This one is a few thoughts, mostly rebuttals to some points that have arisen about the validity of the argument around “Peak Oil” — that we’re a few years away from the greatest oil production we’ll ever see, and it’s downhill from there.

This post follows from my post on Price Elasticity of Oil, as well as this post on blogwaffe, and a whole collection of excellent, but scary, posts on Ted Brenner’s blog.

One of the more common replies to Peak Oil concerns is that oil production is not merely a function of how much oil there is in the ground, but rather a raft of other factors — such as the price of oil (determining what deposits are economical to drill), technology, investment in expanding existing fields, and political stability. I have two points here: the problems of keeping up with demand, and what higher prices mean.
Continue reading “Why “Peak Oil” isn’t what really worries me”

Price elasticity of oil

A post on blogwaffe has reminded me of some of the economic implications of oil depletion, in areas like production of plastics.

As rightly pointed out there, many uses of oil have alternatives that could be pursued, such as plant-based synthesis for plastic. But many areas of oil use, particularly agriculture and some types of transport, would require huge societal shifts to move to alternatives. Unfortunately, however, the economics of oil depletion are not going to help that shift.
Continue reading “Price elasticity of oil”

Kyoto Treaty takes force

Yesterday, Wednesday 16 February, marked the entry into force of the Kyoto Treaty, designed to control and reduce the global emission of greenhouse gases.

Of course, the most noteworthy part of the whole thing is that the US, producer of about a quarter of the world’s greenhouse gases, is not a signatory. Nevertheless, it’s great to see the EU, most notably, committed to controlling emissions.

There’s lots that could be said, and perhaps I will if people are interested. But for now, I’ll comment on the three most commonly used criticisms of the treaty: developing country exceptions; effects on economic growth; and effectiveness in reducing emissions.
Continue reading “Kyoto Treaty takes force”