Wind extinguishes a candle and energizes fire. Likewise with randomness, uncertainty, chaos: you want to use them, not hide from them. You want to be the fire and wish for the wind.
Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.
The antifragile loves randomness and uncertainty, which also means--crucially--a love of errors, a certain class of errors. Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them--and do them well. Let me be more aggressive: we are largely better at doing than we are at thinking, thanks to antifragility. I'd rather be dumb and antifragile than extremely smart and fragile, any time.
It is far easier to figure out if something is fragile than to predict the occurrence of an event that may harm it. Fragility can be measured; risk is not measurable (outside of casinos or the minds of people who call themselves "risk experts").
Anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
This is the tragedy of modernity: as with neurotically overprotective parents, those trying to help are often hurting us the most.
Antifragile Tinkering, Bricolage: A certain class of trial and error, with small errors being "the right" kind of mistakes. All equivalent to rational flâneur.
At no point in history have so many non-risk-takers, that is, those with no personal exposure, exerted so much control. The chief ethical rule is the following: Thou shalt not have antifragility at the expense of the fragility of others.
Black Swans hijack our brains, making us feel we "sort of" or "almost" predicted them, because they are retrospectively explainable.
You get pseudo-order when you seek order; you only get a measure of order and control when you embrace randomness.
Thanks to the fragilista, modern culture has been increasingly building blindness to the mysterious, the impenetrable, what Nietzsche called the Dionysian, in life.
In short, the fragilista (medical, economic, social planning) is one who makes you engage in policies and actions, all artificial, in which the benefits are small and visible, and the side effects potentially severe and invisible.
Simplicity has been difficult to implement in modern life because it is against the spirit of a certain brand of people who seek sophistication so they can justify their profession.
Everything nonlinear in response is either fragile or antifragile to a certain source of randomness.
Doxastic Commitment, or "Soul in the Game": You must only believe predictions and opinions by those who have committed themselves to a certain belief and have something to lose, so that they pay a cost for being wrong.
My experience is that money and transactions purify relations; ideas and abstract matters like "recognition" and "credit" warp them, creating an atmosphere of perpetual rivalry.
Via negativa: In theology and philosophy, the focus on what something is not, an indirect definition. In action, it is a recipe for what to avoid, what not to do--subtraction, not addition, say, in medicine.
If you want to become antifragile, put yourself in the "loves mistakes" situation--to the right of "hates mistakes"--by making your mistakes numerous and small in harm. We will call this process and approach the "barbell" strategy.
Barbell Strategy: A dual strategy, a combination of two extremes, one safe and one speculative, deemed more robust than a "monomodal" strategy; often a necessary condition for antifragility. For instance, in biological systems, the equivalent of marrying an accountant and having an occasional fling with a rock star; for a writer, getting a stable sinecure and writing without the pressures of the market during spare time. Even trial and error is a form of barbell.
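A minimal numerical sketch of the idea, with made-up distributions (the 90/10 split, the payoff sizes, and the fat-tailed "moderate" bet are illustrative assumptions, not figures from the text): the point is not the average return but the worst case, which the barbell bounds by construction while the single "moderate" bet does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # simulated one-period scenarios

# Illustrative distributions (assumptions for the sketch, not from the text):
safe = rng.normal(0.01, 0.005, n)                         # near-riskless sleeve
speculative = rng.choice([-1.0, 10.0], n, p=[0.9, 0.1])   # total loss or huge payoff
medium = 0.05 + 0.15 * rng.standard_t(3, n)               # fat-tailed "moderate" bet

barbell = 0.90 * safe + 0.10 * speculative   # extreme safety plus a small speculative stake
monomodal = medium                           # everything in the "moderate" risk

for name, r in [("barbell", barbell), ("monomodal", monomodal)]:
    print(f"{name:10s} mean={r.mean():+.3f}  worst={r.min():+.3f}  "
          f"5th pct={np.percentile(r, 5):+.3f}")
```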
We can now also see the domain dependence of our minds, a "domain" being an area or category of activity. Some people can understand an idea in one domain, say, medicine, and fail to recognize it in another, say, socioeconomic life. Or they get it in the classroom, but not in the more complicated texture of the street. Humans somehow fail to recognize situations outside the contexts in which they usually learn about them.
The excess energy released from overreaction to setbacks is what innovates!
In spite of the visibility of the counterevidence, and the wisdom you can pick up free of charge from the ancients (or grandmothers), moderns try today to create inventions from situations of comfort, safety, and predictability instead of accepting the notion that "necessity really is the mother of invention." Many, like the great Roman statesman Cato the Censor, looked at comfort, almost any form of comfort, as a road to waste. He did not like it when we had it too easy, as he worried about the weakening of the will.
It is said that the best horses lose when they compete with slower ones, and win against better rivals. Undercompensation from the absence of a stressor, inverse hormesis, absence of challenge, degrades the best of the best.
Most humans manage to squander their free time, as free time makes them dysfunctional, lazy, and unmotivated--the busier they get, the more active they are at other tasks. Overcompensation, here again.
One should have enough self-control to make the audience work hard to listen, which causes them to switch into intellectual overdrive. This paradox of attention has been a little bit investigated: there is empirical evidence of the effect of "disfluency." Mental effort moves us into higher gear, activating more vigorous and more analytical brain machinery. This little bit of effort seems to activate the switch between two distinct mental systems, one intuitive and the other analytical, what psychologists call "system 1" and "system 2." The same or a similar mechanism of overcompensation makes us concentrate better in the presence of a modicum of background random noise, as if the act of countering such noise helps us hone our mental focus.
Layers of redundancy are the central risk management property of natural systems. Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens--usually.
A simple rule of thumb (a heuristic): to estimate the quality of research, take the caliber of the highest detractor, or the caliber of the lowest detractor whom the author answers in print--whichever is lower.
Some jobs and professions are fragile to reputational harm, something that in the age of the Internet cannot possibly be controlled--these jobs aren't worth having. You do not want to "control" your reputation; you won't be able to do it by controlling information flow. Instead, focus on altering your exposure, say, by putting yourself in a position impervious to reputational damage. Or even put yourself in a situation to benefit from the antifragility of information. With few exceptions, those who dress outrageously are robust or even antifragile in reputation; those clean-shaven types who dress in suits and ties are fragile to information about them.
When you don't have debt you don't care about your reputation in economics circles--and somehow it is only when you don't care about your reputation that you tend to have a good one. Just as in matters of seduction, people lend the most to those who need them the least.
It is quite perplexing that those from whom we have benefited the most aren't those who have tried to help us (say with "advice") but rather those who have actively tried--but eventually failed--to harm us.
Machines are harmed by low-level stressors (material fatigue), organisms are harmed by the absence of low-level stressors (hormesis).
In the complex world, the notion of "cause" itself is suspect; it is either nearly impossible to detect or not really defined--another reason to ignore newspapers, with their constant supply of causes for things.
Our antifragilities have conditions. The frequency of stressors matters a bit. Humans tend to do better with acute than with chronic stressors, particularly when the former are followed by ample time for recovery, which allows the stressors to do their jobs as messengers.
Touristification: The attempt to suck randomness out of life. Applies to soccer moms, Washington civil servants, strategic planners, social engineers, "nudge" manipulators, etc. Opposite: rational flâneur.
Rational flâneur (or just flâneur): Someone who, unlike a tourist, makes a decision opportunistically at every step to revise his schedule (or his destination) so he can imbibe things based on new information obtained. In research and entrepreneurship, being a flâneur is called "looking for optionality." A non-narrative approach to life.
Some parts on the inside of a system may be required to be fragile in order to make the system antifragile as a result. Or the organism itself might be fragile, but the information encoded in the genes reproducing it will be antifragile.
So, in a way, while hormesis corresponds to situations by which the individual organism benefits from direct harm to itself, evolution occurs when harm makes the individual organism perish and the benefits are transferred to others, the surviving ones, and future generations.
We can simplify the relationships between fragility, errors, and antifragility as follows. When you are fragile, you depend on things following the exact planned course, with as little deviation as possible--for deviations are more harmful than helpful. This is why the fragile needs to be very predictive in its approach, and, conversely, predictive systems cause fragility. When you want deviations, and you don't care about the possible dispersion of outcomes that the future can bring, since most will be helpful, you are antifragile. Further, the random element in trial and error is not quite random, if it is carried out rationally, using error as a source of information. If every trial provides you with information about what does not work, you start zooming in on a solution--so every attempt becomes more valuable, more like an expense than an error. And of course you make discoveries along the way.
The engineer and historian of engineering Henry Petroski presents a very elegant point. Had the Titanic not had that famous accident, as fatal as it was, we would have kept building larger and larger ocean liners and the next disaster would have been even more tragic. So the people who perished were sacrificed for the greater good; they unarguably saved more lives than were lost. The story of the Titanic illustrates the difference between gains for the system and harm to some of its individual parts.
My characterization of a loser is someone who, after making a mistake, doesn't introspect, doesn't exploit it, feels embarrassed and defensive rather than enriched with a new piece of information, and tries to explain why he made the mistake rather than moving on. These types often consider themselves the "victims" of some large plot, a bad boss, or bad weather. Finally, a thought. He who has never sinned is less reliable than he who has only sinned once. And someone who has made plenty of errors--though never the same error more than once--is more reliable than someone who has never made any.
Nietzsche's famous expression "what does not kill me makes me stronger" can be easily misinterpreted as meaning Mithridatization or hormesis. It may be one of these two phenomena, very possibly, but it could as well mean "what did not kill me did not make me stronger, but spared me because I am stronger than others; but it killed others and the average population is now stronger because the weak are gone." In other words, I passed an exit exam.
This is the central illusion in life: that randomness is risky, that it is a bad thing--and that eliminating randomness is done by eliminating randomness.
For a self-employed person, a small (nonterminal) mistake is information, valuable information, one that directs him in his adaptive approach; for someone employed, a mistake is something that goes into his permanent record, filed in the personnel department. Yogi Berra once said: "We made the wrong mistake"--and for the employed all mistakes are wrong mistakes. Nature loves small errors (without which genetic variations are impossible), humans don't--hence when you rely on human judgment you are at the mercy of a mental bias that disfavors antifragility.
The more variability you observe in a system, the less Black Swan–prone it is.
The ancients perfected the method of random draw in more or less difficult situations--and integrated it into divinations. These draws were really meant to pick a random exit without having to make a decision, so one would not have to live with the burden of the consequences later. You went with what the gods told you to do, so you would not have to second-guess yourself later. I will repeat until I get hoarse: the ancients evolved hidden and sophisticated ways and tricks to exploit randomness.
To summarize, the problem with artificially suppressed volatility is not just that the system tends to become extremely fragile; it is that, at the same time, it exhibits no visible risks. Also remember that volatility is information. In fact, these systems tend to be too calm and exhibit minimal variability as silent risks accumulate beneath the surface.
One of life's packages: no stability without volatility.
Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one's profession, by selecting one in which one does not have to fight one's impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.
The more frequently you look at data, the more noise you are disproportionately likely to get (rather than the valuable part, called the signal); hence the higher the noise-to-signal ratio.
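A back-of-the-envelope sketch of why this happens, under the standard assumption of a small steady drift (the signal) buried in random fluctuations (the noise): over an observation window the drift accumulates in proportion to the window's length, while the typical fluctuation shrinks only with its square root, so shortening the window inflates the noise-to-signal ratio. The drift and volatility numbers below are arbitrary.

```python
import math

mu, sigma = 0.05, 0.20   # assumed annual drift (signal) and volatility (noise)

for label, dt in [("yearly", 1.0), ("monthly", 1 / 12),
                  ("daily", 1 / 252), ("hourly", 1 / 2016)]:
    signal = mu * dt                 # drift accumulated over the window
    noise = sigma * math.sqrt(dt)    # typical fluctuation over the same window
    print(f"{label:8s} noise-to-signal ≈ {noise / signal:7.1f}")
```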
Just as we are not likely to mistake a bear for a stone (but likely to mistake a stone for a bear), it is almost impossible for someone rational, with a clear, uninfected mind, someone who is not drowning in data, to mistake a vital signal, one that matters for his survival, for noise--unless he is overanxious, oversensitive, and neurotic, hence distracted and confused by other messages. Significant signals have a way to reach you.
In an ancestral environment, the anecdote, the "interesting," is information; today, no longer. Likewise, by presenting us with explanations and theories, the media induce an illusion of understanding the world.
The best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible. This is hard to accept in the age of the Internet.
You can control fragility a lot more than you think. So let us refine in three points: (i) Since detecting (anti)fragility is easier, much easier, than prediction and understanding the dynamics of events, the entire mission reduces to the central principle of what to do to minimize harm (and maximize gain) from forecasting errors, that is, to have things that don't fall apart, or even benefit, when we make a mistake. (ii) We do not want to change the world for now (leave that to the Soviet-Harvard utopists and other fragilistas); we should first make things more robust to defects and forecast errors, or even exploit these errors, making lemonade out of the lemons. (iii) As for the lemonade, it looks as if history is in the business of making it out of lemons; antifragility is necessarily how things move forward under the mother of all stressors, called time.
Alas, men of leisure become slaves to inner feelings of dissatisfaction and interests over which they have little control. The freer Nero's time, the more compelled he felt to compensate for lost time in filling gaps in his natural interests, things that he wanted to know a bit deeper. And, as he discovered, the worst thing one can do to feel one knows things a bit deeper is to try to go into them a bit deeper. The sea gets deeper as you go further into it, according to a Venetian proverb.
Nero enjoyed taking long walks in old cities, without a map. He used the following method to detouristify his traveling: he tried to inject some randomness into his schedule by never deciding on the next destination until he had spent some time in the first one, driving his travel agent crazy--when he was in Zagreb, his next destination would be determined by his state of mind while in Zagreb. Largely, it was the smell of places that drew him to them; smell cannot be conveyed in a catalogue.
When you become rich, the pain of losing your fortune exceeds the emotional gain of getting additional wealth, so you start living under continuous emotional threat.
To show how eminently modern this is, I will next reveal how I've applied this brand of Stoicism to wrest back psychological control of the randomness of life. When I was a trader, a profession rife with a high dose of randomness, with continuous psychological harm that drills deep into one's soul, I would go through the mental exercise of assuming every morning that the worst possible thing had actually happened--the rest of the day would be a bonus. Actually the method of mentally adjusting "to the worst" had advantages way beyond the therapeutic, as it made me take a certain class of risks for which the worst case is clear and unambiguous, with limited and known downside. It is hard to stick to a good discipline of mental write-off when things are going well, yet that's when one needs the discipline the most. Moreover, once in a while, I travel, Seneca-style, in uncomfortable circumstances.
An intelligent life is all about such emotional positioning to eliminate the sting of harm, which as we saw is done by mentally writing off belongings so one does not feel any pain from losses. The volatility of the world no longer affects you negatively.
My idea of the modern Stoic sage is someone who transforms fear into prudence, pain into information, mistakes into initiation, and desire into undertaking.
Fragility implies more to lose than to gain, equals more downside than upside, equals (unfavorable) asymmetry.
Antifragility implies more to gain than to lose, equals more upside than downside, equals (favorable) asymmetry.
You are antifragile for a source of volatility if potential gains exceed potential losses (and vice versa). Further, if you have more upside than downside, then you may be harmed by lack of volatility and stressors.
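A tiny sketch of how one might test that asymmetry for a given exposure: perturb the current state up and down by the same amount and compare the gain with the loss. The payoff functions and numbers are invented for illustration.

```python
def asymmetry(payoff, x, delta):
    """Gain from an upward shock minus loss from an equal downward shock.
    Positive: more upside than downside (favorable asymmetry).
    Negative: more downside than upside (unfavorable asymmetry)."""
    gain = payoff(x + delta) - payoff(x)
    loss = payoff(x) - payoff(x - delta)
    return gain - loss

convex = lambda s: s ** 2        # accelerating gains: antifragile-like
concave = lambda s: -(s ** 2)    # accelerating losses: fragile-like

print(asymmetry(convex, 1.0, 0.5))    # +0.5, favorable
print(asymmetry(concave, 1.0, 0.5))   # -0.5, unfavorable
```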
If something is fragile, its risk of breaking makes anything you do to improve it or make it "efficient" inconsequential unless you first reduce that risk of breaking.
I initially used the image of the barbell to describe a dual attitude of playing it safe in some areas (robust to negative Black Swans) and taking a lot of small risks in others (open to positive Black Swans), hence achieving antifragility. That is extreme risk aversion on one side and extreme risk loving on the other, rather than just the "medium" or the beastly "moderate" risk attitude that in fact is a sucker game (because medium risks can be subjected to huge measurement errors). But the barbell also results, because of its construction, in the reduction of downside risk--the elimination of the risk of ruin. For antifragility is the combination aggressiveness plus paranoia--clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself. We saw Seneca's asymmetry: more upside than downside can come simply from the reduction of extreme downside (emotional harm) rather than improving things in the middle.
A barbell strategy with respect to randomness results in achieving antifragility thanks to the mitigation of fragility, the clipping of downside risks of harm--reduced pain from adverse events, while keeping the benefits of potential gains.
Just as Stoicism is the domestication, not the elimination, of emotions, so is the barbell a domestication, not the elimination, of uncertainty.
Let us call here the teleological fallacy the illusion that you know exactly where you are going, and that you knew exactly where you were going in the past, and that others have succeeded in the past by knowing where they were going.
Consider this simple heuristic: your work and ideas, whether in politics, the arts, or other domains, are antifragile if, instead of having one hundred percent of the people finding your mission acceptable or mildly commendable, you are better off having a high percentage of people disliking you and your message (even intensely), combined with a low percentage of extremely loyal and enthusiastic supporters. Option-like payoffs love dispersion of outcomes and do not care much about the average.
If you "have optionality," you don't have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. For you don't have to be right that often. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur. (The key is that your assessment doesn't need to be made beforehand, only after the outcome.)
Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest--and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality--the right to pick and choose his story--is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.
Parties are great for optionality.
In ancient times, learning was for learning's sake, to make someone a good person, worth talking to, not to increase the stock of gold in the city's heavily guarded coffers. Entrepreneurs, particularly those in technical jobs, are not necessarily the best people to have dinner with. I recall a heuristic I used in my previous profession when hiring people (called "separate those who, when they go to a museum, look at the Cézanne on the wall from those who focus on the contents of the trash can"): the more interesting their conversation, the more cultured they are, the more they will be trapped into thinking that they are effective at what they are doing in real business (something psychologists call the halo effect, the mistake of thinking that skills in, say, skiing translate unfailingly into skills in managing a pottery workshop or a bank department, or that a good chess player would be a good strategist in real life). Clearly, it is unrigorous to equate skills at doing with skills at talking. My experience of good practitioners is that they can be totally incomprehensible--they do not have to put much energy into turning their insights and internal coherence into elegant style and narratives. Entrepreneurs are selected to be just doers, not thinkers, and doers do, they don't talk, and it would be unfair, wrong, and downright insulting to measure them in the talk department. The same with artisans: the quality lies in their product, not their conversation--in fact they can easily have false beliefs that, as a side effect (inverse iatrogenics), lead them to make better products, so what?
There is something (here, perception, ideas, theories) and a function of something (here, a price or reality, or something real). The conflation problem is to mistake one for the other, forgetting that there is a "function" and that such function has different properties. Now, the more asymmetries there are between the something and the function of something, the more difference there is between the two. They may end up having nothing to do with each other.
The theory is the child of the cure, not the opposite--ex cura theoria nascitur.
Recall that the steam engine had been discovered and developed by the Greeks some two millennia before the Industrial Revolution. It is just that things that are implemented tend to want to be born from practice, not theory.
Payoffs from research are from Extremistan; they follow a power-law type of statistical distribution, with big, near-unlimited upside but, because of optionality, limited downside. Consequently, the payoff from research should necessarily be linear to the number of trials, not to the total funds involved in the trials. Since the winner will have an explosive payoff, uncapped, the right approach requires a certain style of blind funding. It means the right policy would be what is called "one divided by n" or "1/N" style, spreading attempts in as large a number of trials as possible: if you face n options, invest in all of them in equal amounts. Small amounts per trial, lots of trials, broader than you want. Why? Because in Extremistan, it is more important to be in something in a small amount than to miss it. As one venture capitalist told me: "The payoff can be so large that you can't afford not to be in everything."
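A rough simulation of the "be in everything" point, with assumed numbers (the hit probability, Pareto tail, and project count are all invented): with rare, fat-tailed payoffs and a fixed budget, spreading 1/N across every trial makes it far less likely to miss the occasional explosive winner than concentrating the same budget in a few picks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_projects, budget, n_worlds = 100, 1.0, 20_000
p_hit, tail_alpha = 0.02, 1.5    # rare successes, Pareto-tailed multiples (assumed)

def total_payoff(stakes):
    """Portfolio payoff across many simulated histories for a given allocation."""
    hits = rng.random((n_worlds, n_projects)) < p_hit
    multiples = (1.0 - rng.random((n_worlds, n_projects))) ** (-1.0 / tail_alpha)
    return np.where(hits, multiples, 0.0) @ stakes   # failed trials return nothing

one_over_n = np.full(n_projects, budget / n_projects)   # a little in everything
concentrated = np.zeros(n_projects)
concentrated[:5] = budget / 5                           # same budget in five picks

for name, stakes in [("1/N", one_over_n), ("concentrated", concentrated)]:
    total = total_payoff(stakes)
    print(f"{name:12s} P(miss every winner)={np.mean(total == 0):.2f}  "
          f"median payoff={np.median(total):.2f}")
```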
In the antifragile case (of positive asymmetries, positive Black Swan businesses), such as trial and error, the sample track record will tend to underestimate the long-term average; it will hide the qualities, not the defects. In the fragile case of negative asymmetries (turkey problems), the sample track record will tend to overestimate the long-term average; it will hide the defects and display the qualities.
Let me stop to issue rules based on the chapter so far. (i) Look for optionality; in fact, rank things according to optionality, (ii) preferably with open-ended, not closed-ended, payoffs; (iii) Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more (an idea that is part of the modus operandi of the venture capitalist Marc Andreessen); one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so; (iv) Make sure you are barbelled, whatever that means in your business.
Provided we have the right type of rigor, we need randomness, mess, adventures, uncertainty, self-discovery, near-traumatic episodes, all these things that make life worth living, compared to the structured, fake, and ineffective life of an empty-suit CEO with a preset schedule and an alarm clock.
There is such a thing as nonnerdy applied mathematics: find a problem first, and figure out the math that works for it (just as one acquires language), rather than study in a vacuum through theorems and artificial examples, then change reality to make it look like these examples.
The need to focus on the payoff from your actions instead of studying the structure of the world (or understanding the "True" and the "False") has been largely missed in intellectual history. Horribly missed. The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.
For the fragile, shocks bring higher harm as their intensity increases (up to a certain level).
For the fragile, the cumulative effect of small shocks is smaller than the single effect of an equivalent single large shock.
For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point).
A better way to understand convexity and concavity. What curves outward looks like a smile--what curves inward makes a sad face. The convex (the smile) is antifragile; the concave (the frown) is fragile (it has negative convexity effects).
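One compact way to write the statements above, in notation not used in the original: let h(s) be the harm (or gain) from a shock of size s, with h(0) = 0; fragility corresponds to convex harm, antifragility to convex gain.

```latex
% Convex harm (fragile): damage accelerates with shock size, so a single
% large shock outweighs the same total dose delivered as n small ones.
\[
  h(ns) \;>\; n\,h(s) \qquad (n > 1,\ h \text{ convex},\ h(0) = 0)
\]
% For the antifragile the same inequality holds for the gain function:
% larger deviations bring disproportionately larger benefits.
```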
So here is something to use. The technique, a simple heuristic called the fragility (and antifragility) detection heuristic, works as follows. Let's say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of travel time shows that traffic is fragile and you have too many cars and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect).
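The same check, reduced to a few lines: take the response at equally spaced doses and look at its second differences; persistently positive values mean the harm is accelerating, which is the signature of fragility in this heuristic. The baseline travel time below is an assumed number added only to complete the example.

```python
def acceleration(responses):
    """Second differences of a response measured at equally spaced doses.
    Persistently positive values: harm accelerates with the dose (fragile)."""
    return [responses[i + 1] - 2 * responses[i] + responses[i - 1]
            for i in range(1, len(responses) - 1)]

# The traffic example above: +10,000 cars adds 10 minutes, the next +10,000 adds 30.
travel_time = [30, 40, 70]        # minutes at baseline, +10k, +20k cars (baseline assumed)
print(acceleration(travel_time))  # [20] > 0: travel time is fragile to extra traffic
```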
Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less. The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality--your function of something is very convex, so you can be wrong and still do fine--the more uncertainty, the better. This explains my statement that you can be dumb and antifragile and still do very well.
Let me summarize the argument: if you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform. This property is very central to life.
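A small simulation of that claim, with invented numbers: the same 40 percent hit rate is ruinous for a symmetric (linear) bet but profitable for an option-like bet whose losses are small and known while its gains are large.

```python
import numpy as np

rng = np.random.default_rng(2)
hit = rng.random(100_000) < 0.40          # right only 40% of the time (assumed)

convex_pnl = np.where(hit, +5.0, -1.0)    # option-like: small known loss, large gain
linear_pnl = np.where(hit, +1.0, -1.0)    # linear: symmetric win/loss

print(f"hit rate         {hit.mean():.1%}")
print(f"convex mean P&L  {convex_pnl.mean():+.2f}")   # positive despite guessing worse than random
print(f"linear mean P&L  {linear_pnl.mean():+.2f}")   # negative at the same hit rate
```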
Antifragility implies--contrary to initial instinct--that the old is superior to the new, and much more than you think. No matter how something looks to your intellectual machinery, or how well or poorly it narrates, time will know more about its fragilities and break it when necessary.
For the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
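A standard way to make the nonperishable half of this precise, assuming (a modeling choice, not a claim from the text) that the lifetime T of a nonperishable item has a Pareto, power-law tail with exponent alpha greater than one:

```latex
\[
  \mathbb{E}\!\left[\, T - t \mid T > t \,\right] \;=\; \frac{t}{\alpha - 1}
\]
```

The expected remaining life grows in proportion to the age already survived; a perishable item, whose hazard rate rises with age, shows the opposite pattern.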
Amateurs in any discipline are the best, if you can connect with them. Unlike dilettantes, career professionals are to knowledge what prostitutes are to love.
veritas odium parit--truth brings hatred.
Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
The psychologist Gerd Gigerenzer has a simple heuristic. Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.
Suckers try to win arguments, nonsuckers try to win.
There is a certain property of data: in large data sets, large deviations are vastly more attributable to noise (or variance) than to information (or signal).
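A quick illustration under pure-noise assumptions: generate several hundred independent variables with no relationship whatsoever, and the largest pairwise correlation in the batch still looks impressive, simply because so many comparisons are available.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(30, 500))         # 30 observations of 500 unrelated variables

corr = np.corrcoef(data, rowvar=False)    # 500 x 500 correlation matrix
np.fill_diagonal(corr, 0.0)               # ignore trivial self-correlations

print(f"largest spurious correlation: {np.abs(corr).max():.2f}")
# Add more variables (a bigger data set) and this maximum only grows:
# the largest deviations are increasingly creatures of noise, not signal.
```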
Living things are long volatility. The best way to verify that you are alive is by checking if you like variations. Remember that food would not have a taste if it weren't for hunger; results are meaningless without effort, joy without sadness, convictions without uncertainty, and an ethical life isn't so when stripped of personal risks.