I’ve just finished reading Nassim Nicholas Taleb’s (NNT’s) recent book “Antifragile” (UK link, US link). A brief and organised summary of some of the key ideas is here on the Edge website. The book is disorganised and chock-a-block with interesting ideas and judgemental corollaries.
NNT describes a triad of properties. First: fragility. A fragile object is one that is liable to break. He defines this in terms of a second-order property: how the object's response maps onto variability in what it is exposed to. For a fragile object this response function is convex: application of one major force damages it more than a series of minor shocks, even when the cumulative force applied through the minor shocks equals that from the major shock. So if we relate variation in applied force to harm, we have a convex function. The kind of fragility in which harm is a linear function of force is, NNT asserts, probably quite rare (and this seems correct given the degrees of freedom in any real system).
Second: robustness. Here increases in force do not cause increases in harm: one is simply not a function of the other. Third: anti-fragility, a term of NNT’s own coinage. Here there is “positive convexity”: harm decreases with increasing force, so an anti-fragile object benefits from mishandling. If you are graphically minded see here, or buy his whole book, which has the interesting innovation of a graphical tour in one of its appendices.
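The triad can be sketched numerically. In the toy below, the harm functions (quadratic, linear, negative quadratic) are my own illustrative choices, not NNT's; the point is only that, for the same cumulative force, a convex harm function punishes the single large shock while a "positively convex" one rewards it:

```python
# Toy harm functions -- illustrative choices of mine, not NNT's definitions.

def harm_fragile(force):
    return force ** 2        # convex: harm grows disproportionately with force

def harm_linear(force):
    return force             # the knife-edge case NNT asserts is rare

def harm_antifragile(force):
    return -(force ** 2)     # "positive convexity": shocks confer net benefit

def total_harm(harm, shocks):
    """Cumulative harm from a sequence of applied forces."""
    return sum(harm(f) for f in shocks)

one_big = [10.0]             # a single major shock
many_small = [1.0] * 10      # ten minor shocks with the same cumulative force

# Fragile: the single big shock does far more damage than the small ones.
assert total_harm(harm_fragile, one_big) > total_harm(harm_fragile, many_small)
# Linear: the two regimes are indistinguishable -- the knife-edge case.
assert total_harm(harm_linear, one_big) == total_harm(harm_linear, many_small)
# Anti-fragile: the big shock yields the greater net benefit (lower "harm").
assert total_harm(harm_antifragile, one_big) < total_harm(harm_antifragile, many_small)
```

The linear case is the boundary at which variability ceases to matter, which is exactly why NNT treats it as rare and uninteresting.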
NNT points out that in many situations convexity will be both positive and negative across different ranges. An organism, for example, may be anti-fragile to physical exertion up to some point and fragile beyond it. What exercises NNT is that people do not attend to convexity, and he sees the same logic at work in epistemology: errors in models are not distributed linearly. For example, a naively estimated low probability is more likely to be under- than over-estimated (maybe a Bayesian can tell me something here?).
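On the under-estimation point, a quick sanity check (my own toy calculation, not from the book): estimate a true probability of 0.01 naively from 100 Bernoulli trials, and the exact binomial probabilities show the empirical estimate lands below the truth more often than above it, simply because the estimate is floored at zero and the sampling distribution is skewed:

```python
from math import comb

p, n = 0.01, 100  # a true (low) probability, and a naive sample size

# Exact binomial pmf for the number of observed events in n trials.
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

# Probability that the naive estimate k/n falls below vs above the truth.
under = sum(pr for k, pr in enumerate(pmf) if k / n < p)  # only k = 0 here
over = sum(pr for k, pr in enumerate(pmf) if k / n > p)   # k >= 2

# The naive estimate is more often an under- than an over-estimate.
assert under > over
print(f"P(under-estimate) = {under:.3f}, P(over-estimate) = {over:.3f}")
```

This is only the frequentist half of an answer to the parenthetical question; a Bayesian treatment of the prior over rare events is left to better-qualified readers.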
The epistemic point generalises. NNT emphasises the opacity of the tails of distributions and notes that in fragile/anti-fragile systems the consequences of this inevitable ignorance are inauspicious/propitious. It therefore stands to reason that statistically acquired knowledge is inferior to hedging. It makes more sense to reshape your responses to maximize potential benefits and minimize potential losses – or to seek optionality. The action should be in modifying the second order responses, not predicting the first order behaviour from partial information. This point is well made and even leads to some sassy advice (summarized here – recommend speed read) which might be well attended to by researchers (a species hated by NNT – especially when in receipt of public funds). It also constitutes a fascinating defence of sceptical empiricism against rationalism.
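The optionality point can be put in one toy calculation (the numbers are mine, not NNT's): give yourself a convex, option-like exposure, with losses capped and gains open-ended, and by Jensen's inequality more variability raises your expected payoff even though you have predicted nothing about the underlying:

```python
def option_payoff(x, strike=0.0):
    """A convex, option-like exposure: downside floored at zero, upside unbounded."""
    return max(x - strike, 0.0)

def expected_payoff(outcomes):
    """Expected payoff over equally likely outcomes."""
    return sum(option_payoff(x) for x in outcomes) / len(outcomes)

calm = [-1.0, 1.0]    # low-variability world, mean zero
wild = [-10.0, 10.0]  # high-variability world, same mean

# Same first-order mean in both worlds, but the hedged (convex)
# exposure prefers the volatile one: 5.0 expected versus 0.5.
assert expected_payoff(wild) > expected_payoff(calm)
```

All the work is done by the shape of the response, the second-order structure, not by any forecast of where the underlying will go.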
My response to an empiricism v. rationalism conflict is to sit firmly on the fence – which is what I think we should do as scientists and what we do in fact do as a moderately intelligent species. The problem with NNT’s argument is that it asserts the unknowability of a primary (and fat-tailed) distribution while presuming that the second-order function (e.g., harm caused) can be estimated more easily (this is most clearly illustrated in the Edge summary). This is silly, because scepticism cuts deeper: you can never be sure that you have hedged correctly. For example, perhaps you take out insurance on your household, but have you factored in the risk of political revolution? Science achieves its success partly because it tempers rationalism with empiricism and empiricism with theory. NNT believes in “aggressive tinkering” and “convex bricolage”, as do I (sounds so cool), but seems to believe that this activity is somehow sui generis. If he were listening to episode one of Lisa Jardine’s latest history of science radio programme, I suspect he’d say that Robert Hooke was a genius and Isaac Newton an imbecile. The trick, obviously, is to be sceptical about theory but not dismissive of it.
I am also unhappy with many of NNT’s statements about biology. Partly this is because he seems to think researchers are monomaniacal theorizers (something I object to as an experimental evolutionist). On first reading I became irritated as several statements appeared to invoke group selection. I will not adumbrate this debate because it is both subtle and tedious, but also because I think his conceptual error is more circuitous.
NNT invokes the anti-fragility of nature, supposedly imparted by its long exposure to variability, but is subtle enough to note that fragility manifests itself at extreme values. He supposes the (undirected) solution to this is hierarchical structure, in which fragile units (e.g., organisms) impart anti-fragility up the scale. While it may have some validity, I suspect this is a new sort of scala naturae. Unreferenced are the limits placed on natural selection by population size or by the power of individual-level selection (both of which can lead to decreases in population mean fitness, e.g., under mutation load or Fisherian sex-ratio selection). Also absent is the point that soft selection is common and can favour individual bet hedging (or even, God forbid, cognition). Simply put, nature is messier than this. I like fractals, but trees also have galls and hacked-off branches.
In summary, I think NNT has done us a great service by warning us not to be suckers to “knowledge”. Many of his points about tail opacity and illegitimate inference might apply to certain areas of genomics as much as to the world of economics (which is famously fond of formalisms). I believe his arguments have greatest heft whenever complex systems are modelled in complex ways (although I have yet to figure out how or if they apply to agent-based modelling or Bayesian inference). His iconoclasm is also helpful – sometimes you should trash the textbook, but I recommend first reading it. His health advice is good in parts (e.g., don’t smoke), but potty in others (randomly do tonnes of exercise). The danger, a new kind of sucker’s game (!), is to forgo the benefits of domain-specific knowledge in favour of generalised scepticism and heuristics based on dodgy general intuitions.