Complexity and abstraction

One of the key challenges that arises in almost any context is how to deal with complexity. Probably the most powerful tool we have for addressing it is abstraction — ignoring details in favour of a smaller set of information, at some “higher level”. I’ve been thinking recently about how abstraction appears, with varying success, in so many areas, and in particular about what differentiates the problem areas in which abstraction is more or less effective. This two-part post will meander vaguely through a few of those areas.

I’ll start, appropriately, in physics. Thermodynamics is an excellent example of abstraction at its best — macroscopic quantities of gas, say, have on the order of 10^27 molecules, but we can describe their behaviour very well using only the variables temperature, pressure and volume (a small sketch after the list below shows just how compact that description is). Thermodynamics passes two tests I’d like to propose for the appropriateness of abstraction:

  1. There exists a useful cutoff scale. Considering the behaviour of the gas at “macroscopic” length scales is a well-defined, useful definition, since the molecules are so much smaller.
  2. The different length scales decouple well; or, to put it another way, little behaviour is lost by ignoring interactions at the smaller scale — thermodynamics describes the gas really well.
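
As promised, a minimal sketch (in Python, with standard constants) of essentially the whole macroscopic theory of an ideal gas. Three variables and one constant suffice; the 10^23-odd molecules appear nowhere:

```python
# The ideal gas law PV = nRT: a complete macroscopic description of a gas,
# with no reference to individual molecules.

R = 8.314  # gas constant, J / (mol K)

def pressure(n_moles: float, temperature_k: float, volume_m3: float) -> float:
    """Pressure in pascals, from P = nRT / V."""
    return n_moles * R * temperature_k / volume_m3

# One mole at 0 C in 22.4 L should give roughly one atmosphere.
print(pressure(1.0, 273.15, 0.0224))  # ~101,000 Pa
```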


Physics is full of such examples — in fact, many physicists consider it a very non-trivial fact that physics can be split into so many different length scales, and described accurately at each of them. Newtonian mechanics works universally at the human scale; quantum mechanics and field theory are superbly accurate at atomic lengths, even though we know they are incomplete. Indeed, quantum field theory predicts infinite-energy interactions at shorter lengths, yet its results remain extraordinarily accurate, even though we don’t yet have a theory that deals convincingly with those infinities. It is, in fact, a useful approach to model “scale” as a physical dimension, so that physics at different energies actually happens at different places in this fifth dimension.

But considering complexity more generally, physics is also very amenable to abstraction in the number of objects involved: much of physics can be done very nicely by considering the interaction of only two or three particles at a time (the sketch below is an example). This is, in my opinion, largely the reason that “serious” physics is so much older as a discipline than most other sciences: most physics can plausibly be done by individuals, without computers. As a result, it’s been “tractable” for far longer than the other disciplines I’ll discuss later.
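
As a hedged illustration of that tractability: the classic two-body problem has a closed-form answer, and Kepler’s third law evaluates in a few lines (the constants below are standard textbook values for the Moon’s orbit):

```python
import math

# Kepler's third law for a small body orbiting a much heavier one:
# T = 2 * pi * sqrt(a^3 / (G * M)). Two bodies, one closed-form answer;
# the kind of problem that was tractable long before computers.

GM_EARTH = 3.986e14  # Earth's gravitational parameter G*M, in m^3 / s^2
A_MOON = 3.844e8     # mean Earth-Moon distance, in m

period_s = 2 * math.pi * math.sqrt(A_MOON**3 / GM_EARTH)
print(period_s / 86400)  # ~27.4 days: the sidereal month
```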

There are a few cases where one or both of the requirements above don’t hold, and it’s those areas of physics that are particularly hard. The surface of a black hole is a region of such strong curvature and such high energies that the long-range approximations general relativity normally makes don’t hold, and we need (somehow) to take quantum effects into account. There is thus no useful cutoff that we can define, and the complexity problem makes general relativity inapplicable. On the other hand, in the photoelectric effect (electric current generated by electrons being excited by incoming light) we have a useful cutoff — we can measure current and light intensity. However, in taking a macroscopic cutoff, we lose important information: because photon energies and electron orbits are quantised, increasing the intensity of the light increases the number, not the energy, of the electrons produced. This observation was vital to the birth of quantum mechanics.
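
The leak is easy to see numerically. Here is a sketch using Einstein’s photoelectric relation E = hf - W (the standard textbook account, not anything specific to this post): the energy of each ejected electron depends only on the light’s frequency, so turning up the intensity changes how many electrons you get, never how energetic they are.

```python
H = 6.626e-34   # Planck constant, J s
EV = 1.602e-19  # joules per electron-volt

def electron_energy_ev(frequency_hz: float, work_function_ev: float) -> float:
    """Einstein's photoelectric relation E = h*f - W, in electron-volts.

    Intensity appears nowhere here: it only sets how many photons arrive,
    and hence the size of the current. That per-electron energy is exactly
    the information the macroscopic (current and intensity) picture loses.
    """
    return H * frequency_hz / EV - work_function_ev

# Ultraviolet light on sodium (work function ~2.3 eV, a textbook value):
print(electron_energy_ev(1.0e15, 2.3))  # ~1.8 eV per electron, at any intensity
```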

Abstraction is one of the cornerstones of modern computer science and application design. Object-oriented programming (OOP) is now used almost universally in large programs. OOP directly addresses criterion 1 above (of cutoff scale) by requiring the programmer to split the program into “objects” that communicate with each other in a simple, well-defined manner. Thus one defines a cutoff scale at the level of objects, making a program with no intrinsic cutoff scale amenable to abstraction. Furthermore, it is a key requirement of well-designed objects that they have no hidden side effects — after taking the cutoff, a lot of design work goes into making sure that criterion 2 above is satisfied too. So in fact much of the improvement in software engineering techniques in recent decades has come just from making programming amenable to abstraction.
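
As a minimal sketch of what that looks like in code (a hypothetical BankAccount class, invented for illustration): the public methods are the cutoff scale, the internal state is the detail abstracted away, and the guard clauses are the criterion-2 work of keeping behaviour from leaking out.

```python
class BankAccount:
    """A toy object: its public interface is the cutoff scale.

    Callers see deposit / withdraw / balance; the internal representation
    (an integer count of pence, which avoids floating-point drift) is a
    detail hidden behind the abstraction.
    """

    def __init__(self) -> None:
        self._balance_pence = 0

    def deposit(self, pence: int) -> None:
        if pence <= 0:
            raise ValueError("deposit must be positive")
        self._balance_pence += pence

    def withdraw(self, pence: int) -> None:
        # Fail loudly rather than go overdrawn: no hidden side effects.
        if pence > self._balance_pence:
            raise ValueError("insufficient funds")
        self._balance_pence -= pence

    def balance(self) -> int:
        return self._balance_pence
```

An object that silently mutated some global state on withdrawal, say, would be precisely the kind of leak criterion 2 forbids.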

Economics is a great example of a discipline where clear cutoffs exist, but where predictions are less than ideal as a result of behaviour loss, or the leaking of effects from behind the abstraction. For example, economists typically use the model of a perfectly rational person (homo economicus) to construct models of economic behaviour. So, for example, one defines a price vs. demand curve, showing demand for a good (item) at any particular price level. One assumes this is a decreasing function — as price rises, demand falls. This is often the case, but a lot of recent work has looked at cases where it isn’t. For example, it turns out that people are, in some situations, more risk-averse than the odds would suggest. Furthermore, a higher price can sometimes be read by consumers as a signal of quality, and can actually increase demand. The effects of consumers having imperfect information, or showing brand loyalty, or even following fashion, are all significant but are very hard to include in the simple abstractions that have traditionally been used.
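
A toy sketch of the abstraction and its leak (the functions below are illustrative inventions, not fitted to any data): the textbook demand curve falls monotonically with price, while a price-as-quality-signal good can rise before it falls.

```python
# Illustrative only: toy demand curves, not real econometrics.

def textbook_demand(price: float) -> float:
    """Homo economicus: demand falls monotonically as price rises."""
    return max(0.0, 100.0 - 2.0 * price)

def signalling_demand(price: float) -> float:
    """A good where buyers read a higher price as higher quality:
    demand rises with price at first, then eventually falls."""
    return max(0.0, 60.0 + 3.0 * price - 0.1 * price**2)

for p in (5.0, 15.0, 30.0):
    print(p, textbook_demand(p), signalling_demand(p))
# textbook:   90.0, 70.0, 40.0  (always falling)
# signalling: 72.5, 82.5, 60.0  (rises, then falls)
```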

In the second part of this marathon post, I’ll look at why biology, and even more so psychology, have much greater complexity problems than physics (yes, it’s because abstraction doesn’t work well). I’ll also look at how the human brain uses abstraction absolutely everywhere — and at some of the funny effects and failures that arise from behaviours leaking through from behind the abstraction (criterion 2 above). I’d be interested in any thoughts on other areas that show interesting applications of abstraction!

6 thoughts on “Complexity and abstraction”

  1. Very good post! I look forward to reading the second part. However, I have one criticism and one addition.

    Criticism: At Standard Temperature and Pressure (0 C, 1 Atm), one mole (6.02×10^23 molecules) of gas occupies 22.4 L of space. Thus, I would say that your cutoff for macroscopic quantities of gas (10^27 molecules) is several orders of magnitude too high. Is there a reason you picked this number?
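
    In case anyone wants to double-check that 22.4 L figure, it drops straight out of the ideal gas law, V = nRT/P:

    ```python
    # V = nRT / P at STP: should recover ~22.4 L per mole.
    R = 8.314         # gas constant, J / (mol K)
    T = 273.15        # 0 C in kelvin
    P = 101325.0      # 1 atm in pascals
    print(R * T / P)  # ~0.0224 m^3, i.e. 22.4 L
    ```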

    Addition: An example of where higher prices result in higher demand: Name brand foods versus store brand foods. Consider Kellogg’s Corn Flakes versus Ralph’s brand corn flakes. The Ralph’s brand flakes are probably made by Kellogg’s, thus there is no difference, nutritional- or taste-wise, between the two. However, most people will shun the store brand cereal and go straight for the name brand cereal. The only difference between the two is price (name brand is higher) and packaging (name brand is fancier).

  2. Yes, there’s a very good reason I picked 10^27: I thought that was how large a mole was. But you quite rightly point out that it’s more like 10^23. Oops. I blame it on inflation.

    Thanks for the comment!

  3. One mole == Number of carbon-12 atoms in 0.012 kg of carbon-12 ~= 6.0221415(10) x 10^23 particles

    Wikipedia: Mole
    Wikipedia: Avogadro’s Number
    National Mole Day Foundation

    Incidentally, I would put the cutoff for “macroscopic” amounts of gas at much smaller than one mole. In fact, consider a liter of very low pressure gas (I’m talking millitorr here) at non-cryogenic temperatures. Gas at that pressure contains much less than one mole of particles, but the typical gas laws will still apply. To tell the truth, I’m not quite sure where to put the cutoff above which gas can be considered “bulk”.

    You’ll need the cutoff to be an intensive property. In fact, the cutoff related to any abstraction of physical laws should always be dimensionless. In the case of thermodynamics (well… let me talk statistical mechanics) one typical parameter is proportional to the average distance between particles divided by the typical wavelength of a particle (as I recall, this number is actually reported using densities, so the lengths get cubed… and perhaps inverted).

    If that parameter is large, things simplify nicely (everyone likes classical mechanics and counting problems right?). Pack things too tightly, though, and you need to worry about serious quantum effects. Of course, a different level of abstraction, as you call it, would just try to average these quantum effects (to low order) and roll them into some other (it better be dimensionless!) parameter.

    This is all a little naive, though, since you obviously can’t get classical thermodynamics out of just one particle. So there’s also an absolute-number requirement, as you suggest Paul, and over which Adam mulls. How do we turn such an (extensive) requirement into an intensive (and dimensionless) number? Statistics to the rescue again. It’s related to the relative distribution of the particles’ energies. A tight distribution, and you’ve got a well-defined thermodynamic temperature; that is, only one parameter is really needed to specify the distribution: its average. A broad one, and you’ve got to do more work; you need more parameters.
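
    A rough numerical version of that parameter, sketched with textbook constants: the thermal de Broglie wavelength is lambda = h / sqrt(2*pi*m*k*T), and the dimensionless degeneracy parameter n*lambda^3 says whether the classical abstraction is safe.

    ```python
    import math

    # Degeneracy parameter n * lambda^3, where lambda is the thermal
    # de Broglie wavelength: lambda = h / sqrt(2 * pi * m * k * T).
    # n * lambda^3 << 1 means particles are far apart relative to their
    # wavelength, so classical statistics (and the simple abstraction) hold.

    H = 6.626e-34  # Planck constant, J s
    K = 1.381e-23  # Boltzmann constant, J / K

    def degeneracy(n_density: float, mass_kg: float, temp_k: float) -> float:
        wavelength = H / math.sqrt(2 * math.pi * mass_kg * K * temp_k)
        return n_density * wavelength**3

    # Nitrogen at room conditions: n ~ 2.5e25 m^-3, m ~ 4.65e-26 kg.
    print(degeneracy(2.5e25, 4.65e-26, 300.0))  # ~2e-7: safely classical
    ```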

    In a greater scheme of things it is always in balance, including at the national, racial, religious and professional level(s).

    In order to achieve the Buddha (Avtar) mind-body states, you should correct the issues in the past.

    The mind (brain) is non-linear.

    We are generally finite beings.

  6. When reality seems really complicated, one can always abstract away all the nitty-gritty linear details of logic or ordered discourse. Apparently.

    “Generally finite beings” — does that mean sometimes we aren’t? Or that some of us aren’t?
